00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v23.11" build number 203 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3704 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.045 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.065 Fetching changes from the remote Git repository 00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.096 Using shallow fetch with depth 1 00:00:00.096 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.096 > git --version # timeout=10 00:00:00.162 > git --version # 'git version 2.39.2' 00:00:00.162 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.933 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.945 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.956 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.956 > git config core.sparsecheckout # timeout=10 00:00:03.968 > git read-tree -mu HEAD # timeout=10 00:00:03.989 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 
00:00:04.015 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.015 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.125 [Pipeline] Start of Pipeline 00:00:04.139 [Pipeline] library 00:00:04.140 Loading library shm_lib@master 00:00:04.141 Library shm_lib@master is cached. Copying from home. 00:00:04.159 [Pipeline] node 00:00:04.173 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:04.175 [Pipeline] { 00:00:04.186 [Pipeline] catchError 00:00:04.188 [Pipeline] { 00:00:04.213 [Pipeline] wrap 00:00:04.265 [Pipeline] { 00:00:04.271 [Pipeline] stage 00:00:04.272 [Pipeline] { (Prologue) 00:00:04.285 [Pipeline] echo 00:00:04.286 Node: VM-host-WFP7 00:00:04.290 [Pipeline] cleanWs 00:00:04.299 [WS-CLEANUP] Deleting project workspace... 00:00:04.299 [WS-CLEANUP] Deferred wipeout is used... 00:00:04.305 [WS-CLEANUP] done 00:00:04.508 [Pipeline] setCustomBuildProperty 00:00:04.567 [Pipeline] httpRequest 00:00:05.084 [Pipeline] echo 00:00:05.086 Sorcerer 10.211.164.20 is alive 00:00:05.095 [Pipeline] retry 00:00:05.097 [Pipeline] { 00:00:05.111 [Pipeline] httpRequest 00:00:05.115 HttpMethod: GET 00:00:05.116 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.116 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.118 Response Code: HTTP/1.1 200 OK 00:00:05.118 Success: Status code 200 is in the accepted range: 200,404 00:00:05.119 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.550 [Pipeline] } 00:00:05.563 [Pipeline] // retry 00:00:05.569 [Pipeline] sh 00:00:05.849 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.867 [Pipeline] httpRequest 00:00:06.449 [Pipeline] echo 00:00:06.450 Sorcerer 10.211.164.20 is alive 00:00:06.459 [Pipeline] retry 00:00:06.461 
[Pipeline] { 00:00:06.471 [Pipeline] httpRequest 00:00:06.475 HttpMethod: GET 00:00:06.475 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:06.476 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:06.484 Response Code: HTTP/1.1 200 OK 00:00:06.484 Success: Status code 200 is in the accepted range: 200,404 00:00:06.485 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:41.516 [Pipeline] } 00:00:41.536 [Pipeline] // retry 00:00:41.547 [Pipeline] sh 00:00:41.836 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:44.386 [Pipeline] sh 00:00:44.670 + git -C spdk log --oneline -n5 00:00:44.670 b18e1bd62 version: v24.09.1-pre 00:00:44.670 19524ad45 version: v24.09 00:00:44.670 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:00:44.670 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:00:44.670 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:00:44.689 [Pipeline] withCredentials 00:00:44.699 > git --version # timeout=10 00:00:44.711 > git --version # 'git version 2.39.2' 00:00:44.727 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:44.729 [Pipeline] { 00:00:44.741 [Pipeline] retry 00:00:44.742 [Pipeline] { 00:00:44.760 [Pipeline] sh 00:00:45.043 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:45.316 [Pipeline] } 00:00:45.333 [Pipeline] // retry 00:00:45.338 [Pipeline] } 00:00:45.355 [Pipeline] // withCredentials 00:00:45.365 [Pipeline] httpRequest 00:00:45.827 [Pipeline] echo 00:00:45.828 Sorcerer 10.211.164.20 is alive 00:00:45.836 [Pipeline] retry 00:00:45.837 [Pipeline] { 00:00:45.849 [Pipeline] httpRequest 00:00:45.853 HttpMethod: GET 00:00:45.854 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:45.854 Sending request to url: 
http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:45.862 Response Code: HTTP/1.1 200 OK 00:00:45.862 Success: Status code 200 is in the accepted range: 200,404 00:00:45.863 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:22.264 [Pipeline] } 00:01:22.282 [Pipeline] // retry 00:01:22.291 [Pipeline] sh 00:01:22.577 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:23.969 [Pipeline] sh 00:01:24.257 + git -C dpdk log --oneline -n5 00:01:24.257 eeb0605f11 version: 23.11.0 00:01:24.257 238778122a doc: update release notes for 23.11 00:01:24.257 46aa6b3cfc doc: fix description of RSS features 00:01:24.257 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:24.257 7e421ae345 devtools: support skipping forbid rule check 00:01:24.271 [Pipeline] writeFile 00:01:24.281 [Pipeline] sh 00:01:24.565 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:24.578 [Pipeline] sh 00:01:24.861 + cat autorun-spdk.conf 00:01:24.861 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.861 SPDK_RUN_ASAN=1 00:01:24.861 SPDK_RUN_UBSAN=1 00:01:24.861 SPDK_TEST_RAID=1 00:01:24.861 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:24.861 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:24.861 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.869 RUN_NIGHTLY=1 00:01:24.871 [Pipeline] } 00:01:24.884 [Pipeline] // stage 00:01:24.896 [Pipeline] stage 00:01:24.898 [Pipeline] { (Run VM) 00:01:24.909 [Pipeline] sh 00:01:25.194 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:25.194 + echo 'Start stage prepare_nvme.sh' 00:01:25.194 Start stage prepare_nvme.sh 00:01:25.194 + [[ -n 6 ]] 00:01:25.194 + disk_prefix=ex6 00:01:25.194 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:01:25.194 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:25.194 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 
00:01:25.194 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.194 ++ SPDK_RUN_ASAN=1 00:01:25.194 ++ SPDK_RUN_UBSAN=1 00:01:25.194 ++ SPDK_TEST_RAID=1 00:01:25.194 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:25.194 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:25.194 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:25.194 ++ RUN_NIGHTLY=1 00:01:25.194 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:25.194 + nvme_files=() 00:01:25.194 + declare -A nvme_files 00:01:25.194 + backend_dir=/var/lib/libvirt/images/backends 00:01:25.194 + nvme_files['nvme.img']=5G 00:01:25.194 + nvme_files['nvme-cmb.img']=5G 00:01:25.194 + nvme_files['nvme-multi0.img']=4G 00:01:25.194 + nvme_files['nvme-multi1.img']=4G 00:01:25.194 + nvme_files['nvme-multi2.img']=4G 00:01:25.194 + nvme_files['nvme-openstack.img']=8G 00:01:25.194 + nvme_files['nvme-zns.img']=5G 00:01:25.194 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:25.194 + (( SPDK_TEST_FTL == 1 )) 00:01:25.194 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:25.194 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:25.194 + for nvme in "${!nvme_files[@]}" 00:01:25.194 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:25.194 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.194 + for nvme in "${!nvme_files[@]}" 00:01:25.194 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:25.194 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.194 + for nvme in "${!nvme_files[@]}" 00:01:25.194 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:25.194 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:25.194 + for nvme in "${!nvme_files[@]}" 00:01:25.194 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:25.194 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.194 + for nvme in "${!nvme_files[@]}" 00:01:25.194 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:25.194 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.194 + for nvme in "${!nvme_files[@]}" 00:01:25.194 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:25.194 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:25.194 + for nvme in "${!nvme_files[@]}" 00:01:25.194 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:25.454 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:25.454 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:25.454 + echo 'End stage prepare_nvme.sh' 00:01:25.454 End stage prepare_nvme.sh 00:01:25.466 [Pipeline] sh 00:01:25.750 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:25.750 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:25.750 00:01:25.750 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:25.750 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:25.750 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:25.750 HELP=0 00:01:25.750 DRY_RUN=0 00:01:25.750 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:25.750 NVME_DISKS_TYPE=nvme,nvme, 00:01:25.750 NVME_AUTO_CREATE=0 00:01:25.750 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:25.750 NVME_CMB=,, 00:01:25.750 NVME_PMR=,, 00:01:25.750 NVME_ZNS=,, 00:01:25.750 NVME_MS=,, 00:01:25.750 NVME_FDP=,, 00:01:25.750 SPDK_VAGRANT_DISTRO=fedora39 00:01:25.750 SPDK_VAGRANT_VMCPU=10 00:01:25.750 SPDK_VAGRANT_VMRAM=12288 00:01:25.750 SPDK_VAGRANT_PROVIDER=libvirt 00:01:25.750 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:25.750 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:25.750 SPDK_OPENSTACK_NETWORK=0 00:01:25.751 VAGRANT_PACKAGE_BOX=0 00:01:25.751 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:25.751 
FORCE_DISTRO=true 00:01:25.751 VAGRANT_BOX_VERSION= 00:01:25.751 EXTRA_VAGRANTFILES= 00:01:25.751 NIC_MODEL=virtio 00:01:25.751 00:01:25.751 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:25.751 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:27.662 Bringing machine 'default' up with 'libvirt' provider... 00:01:27.922 ==> default: Creating image (snapshot of base box volume). 00:01:28.183 ==> default: Creating domain with the following settings... 00:01:28.183 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733538878_d591ca4c24c82a062209 00:01:28.183 ==> default: -- Domain type: kvm 00:01:28.183 ==> default: -- Cpus: 10 00:01:28.183 ==> default: -- Feature: acpi 00:01:28.183 ==> default: -- Feature: apic 00:01:28.183 ==> default: -- Feature: pae 00:01:28.183 ==> default: -- Memory: 12288M 00:01:28.183 ==> default: -- Memory Backing: hugepages: 00:01:28.183 ==> default: -- Management MAC: 00:01:28.183 ==> default: -- Loader: 00:01:28.183 ==> default: -- Nvram: 00:01:28.183 ==> default: -- Base box: spdk/fedora39 00:01:28.183 ==> default: -- Storage pool: default 00:01:28.183 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733538878_d591ca4c24c82a062209.img (20G) 00:01:28.183 ==> default: -- Volume Cache: default 00:01:28.183 ==> default: -- Kernel: 00:01:28.183 ==> default: -- Initrd: 00:01:28.183 ==> default: -- Graphics Type: vnc 00:01:28.183 ==> default: -- Graphics Port: -1 00:01:28.183 ==> default: -- Graphics IP: 127.0.0.1 00:01:28.183 ==> default: -- Graphics Password: Not defined 00:01:28.183 ==> default: -- Video Type: cirrus 00:01:28.183 ==> default: -- Video VRAM: 9216 00:01:28.183 ==> default: -- Sound Type: 00:01:28.183 ==> default: -- Keymap: en-us 00:01:28.183 ==> default: -- TPM Path: 00:01:28.183 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:28.183 ==> default: -- Command line args: 00:01:28.183 
==> default: -> value=-device, 00:01:28.183 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:28.183 ==> default: -> value=-drive, 00:01:28.183 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:28.183 ==> default: -> value=-device, 00:01:28.183 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.183 ==> default: -> value=-device, 00:01:28.183 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:28.183 ==> default: -> value=-drive, 00:01:28.183 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:28.183 ==> default: -> value=-device, 00:01:28.183 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.183 ==> default: -> value=-drive, 00:01:28.183 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:28.183 ==> default: -> value=-device, 00:01:28.183 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.183 ==> default: -> value=-drive, 00:01:28.183 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:28.183 ==> default: -> value=-device, 00:01:28.183 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.183 ==> default: Creating shared folders metadata... 00:01:28.183 ==> default: Starting domain. 00:01:30.094 ==> default: Waiting for domain to get an IP address... 00:01:48.193 ==> default: Waiting for SSH to become available... 00:01:48.193 ==> default: Configuring and enabling network interfaces... 
00:01:53.561 default: SSH address: 192.168.121.183:22 00:01:53.561 default: SSH username: vagrant 00:01:53.561 default: SSH auth method: private key 00:01:56.100 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:04.223 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:09.498 ==> default: Mounting SSHFS shared folder... 00:02:12.038 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:12.038 ==> default: Checking Mount.. 00:02:13.420 ==> default: Folder Successfully Mounted! 00:02:13.420 ==> default: Running provisioner: file... 00:02:14.803 default: ~/.gitconfig => .gitconfig 00:02:15.062 00:02:15.062 SUCCESS! 00:02:15.062 00:02:15.062 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:15.062 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:15.062 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:15.063 00:02:15.076 [Pipeline] } 00:02:15.093 [Pipeline] // stage 00:02:15.102 [Pipeline] dir 00:02:15.103 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:15.104 [Pipeline] { 00:02:15.118 [Pipeline] catchError 00:02:15.119 [Pipeline] { 00:02:15.132 [Pipeline] sh 00:02:15.457 + vagrant ssh-config --host vagrant 00:02:15.457 + sed -ne /^Host/,$p 00:02:15.457 + tee ssh_conf 00:02:17.992 Host vagrant 00:02:17.992 HostName 192.168.121.183 00:02:17.992 User vagrant 00:02:17.992 Port 22 00:02:17.992 UserKnownHostsFile /dev/null 00:02:17.992 StrictHostKeyChecking no 00:02:17.992 PasswordAuthentication no 00:02:17.992 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:17.992 IdentitiesOnly yes 00:02:17.992 LogLevel FATAL 00:02:17.992 ForwardAgent yes 00:02:17.992 ForwardX11 yes 00:02:17.992 00:02:18.007 [Pipeline] withEnv 00:02:18.010 [Pipeline] { 00:02:18.023 [Pipeline] sh 00:02:18.307 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:18.307 source /etc/os-release 00:02:18.307 [[ -e /image.version ]] && img=$(< /image.version) 00:02:18.307 # Minimal, systemd-like check. 00:02:18.307 if [[ -e /.dockerenv ]]; then 00:02:18.307 # Clear garbage from the node's name: 00:02:18.307 # agt-er_autotest_547-896 -> autotest_547-896 00:02:18.307 # $HOSTNAME is the actual container id 00:02:18.307 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:18.307 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:18.307 # We can assume this is a mount from a host where container is running, 00:02:18.307 # so fetch its hostname to easily identify the target swarm worker. 
00:02:18.307 container="$(< /etc/hostname) ($agent)" 00:02:18.307 else 00:02:18.307 # Fallback 00:02:18.307 container=$agent 00:02:18.307 fi 00:02:18.307 fi 00:02:18.307 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:18.307 00:02:18.582 [Pipeline] } 00:02:18.599 [Pipeline] // withEnv 00:02:18.609 [Pipeline] setCustomBuildProperty 00:02:18.625 [Pipeline] stage 00:02:18.628 [Pipeline] { (Tests) 00:02:18.646 [Pipeline] sh 00:02:18.931 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:19.205 [Pipeline] sh 00:02:19.489 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:19.762 [Pipeline] timeout 00:02:19.763 Timeout set to expire in 1 hr 30 min 00:02:19.765 [Pipeline] { 00:02:19.778 [Pipeline] sh 00:02:20.061 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:20.632 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:20.648 [Pipeline] sh 00:02:20.937 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:21.213 [Pipeline] sh 00:02:21.497 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.775 [Pipeline] sh 00:02:22.061 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:22.321 ++ readlink -f spdk_repo 00:02:22.321 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.321 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.321 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.321 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.321 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.321 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.321 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.321 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:22.321 + cd /home/vagrant/spdk_repo 00:02:22.321 + source /etc/os-release 00:02:22.321 ++ NAME='Fedora Linux' 00:02:22.321 ++ VERSION='39 (Cloud Edition)' 00:02:22.321 ++ ID=fedora 00:02:22.321 ++ VERSION_ID=39 00:02:22.321 ++ VERSION_CODENAME= 00:02:22.321 ++ PLATFORM_ID=platform:f39 00:02:22.321 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:22.321 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.321 ++ LOGO=fedora-logo-icon 00:02:22.321 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:22.321 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.321 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:22.321 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.321 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.321 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.321 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:22.321 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.321 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:22.321 ++ SUPPORT_END=2024-11-12 00:02:22.321 ++ VARIANT='Cloud Edition' 00:02:22.321 ++ VARIANT_ID=cloud 00:02:22.321 + uname -a 00:02:22.321 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:22.321 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:22.892 Hugepages 00:02:22.892 node hugesize free / total 00:02:22.892 node0 1048576kB 0 / 0 00:02:22.892 node0 2048kB 0 / 0 00:02:22.892 00:02:22.892 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.892 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:22.892 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:22.892 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:22.892 + rm -f /tmp/spdk-ld-path 00:02:22.892 + source autorun-spdk.conf 00:02:22.892 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.892 ++ SPDK_RUN_ASAN=1 00:02:22.892 ++ SPDK_RUN_UBSAN=1 00:02:22.892 ++ SPDK_TEST_RAID=1 00:02:22.892 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:22.892 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:22.892 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.892 ++ RUN_NIGHTLY=1 00:02:22.892 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:22.892 + [[ -n '' ]] 00:02:22.892 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:22.892 + for M in /var/spdk/build-*-manifest.txt 00:02:22.892 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:22.892 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.892 + for M in /var/spdk/build-*-manifest.txt 00:02:22.892 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:22.892 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.892 + for M in /var/spdk/build-*-manifest.txt 00:02:22.892 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:22.892 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.892 ++ uname 00:02:22.892 + [[ Linux == \L\i\n\u\x ]] 00:02:22.892 + sudo dmesg -T 00:02:23.152 + sudo dmesg --clear 00:02:23.152 + dmesg_pid=6173 00:02:23.152 + [[ Fedora Linux == FreeBSD ]] 00:02:23.152 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:23.152 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:23.152 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:23.152 + sudo dmesg -Tw 00:02:23.152 + [[ -x /usr/src/fio-static/fio ]] 00:02:23.152 + export FIO_BIN=/usr/src/fio-static/fio 00:02:23.152 + FIO_BIN=/usr/src/fio-static/fio 00:02:23.152 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:23.152 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:23.152 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:23.152 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.152 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:23.153 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:23.153 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.153 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:23.153 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:23.153 Test configuration: 00:02:23.153 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.153 SPDK_RUN_ASAN=1 00:02:23.153 SPDK_RUN_UBSAN=1 00:02:23.153 SPDK_TEST_RAID=1 00:02:23.153 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:23.153 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:23.153 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.153 RUN_NIGHTLY=1 02:35:34 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:23.153 02:35:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:23.153 02:35:34 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:23.153 02:35:34 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:23.153 02:35:34 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.153 02:35:34 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.153 02:35:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.153 02:35:34 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.153 02:35:34 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.153 02:35:34 -- paths/export.sh@5 -- $ export PATH 00:02:23.153 02:35:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.153 02:35:34 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:23.153 02:35:34 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:23.153 02:35:34 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733538934.XXXXXX 00:02:23.153 02:35:34 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733538934.oyJjIO 00:02:23.153 02:35:34 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:23.153 02:35:34 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']' 00:02:23.153 02:35:34 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:23.153 02:35:34 -- common/autobuild_common.sh@486 -- $ 
scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:23.153 02:35:34 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:23.153 02:35:34 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:23.153 02:35:34 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:23.153 02:35:34 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:23.153 02:35:34 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.153 02:35:34 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:23.153 02:35:34 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:23.153 02:35:34 -- pm/common@17 -- $ local monitor 00:02:23.153 02:35:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.153 02:35:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.153 02:35:34 -- pm/common@21 -- $ date +%s 00:02:23.153 02:35:34 -- pm/common@25 -- $ sleep 1 00:02:23.153 02:35:34 -- pm/common@21 -- $ date +%s 00:02:23.153 02:35:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733538934 00:02:23.153 02:35:34 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733538934 00:02:23.413 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733538934_collect-cpu-load.pm.log 00:02:23.413 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733538934_collect-vmstat.pm.log 00:02:24.354 02:35:35 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:24.354 02:35:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:24.355 02:35:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:24.355 02:35:35 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:24.355 02:35:35 -- spdk/autobuild.sh@16 -- $ date -u 00:02:24.355 Sat Dec 7 02:35:35 AM UTC 2024 00:02:24.355 02:35:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:24.355 v24.09-1-gb18e1bd62 00:02:24.355 02:35:35 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:24.355 02:35:35 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:24.355 02:35:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.355 02:35:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.355 02:35:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.355 ************************************ 00:02:24.355 START TEST asan 00:02:24.355 ************************************ 00:02:24.355 using asan 00:02:24.355 02:35:35 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:24.355 00:02:24.355 real 0m0.000s 00:02:24.355 user 0m0.000s 00:02:24.355 sys 0m0.000s 00:02:24.355 02:35:35 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.355 02:35:35 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.355 ************************************ 00:02:24.355 END TEST asan 00:02:24.355 ************************************ 00:02:24.355 02:35:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:24.355 02:35:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:24.355 02:35:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:24.355 02:35:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.355 02:35:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.355 
************************************ 00:02:24.355 START TEST ubsan 00:02:24.355 ************************************ 00:02:24.355 using ubsan 00:02:24.355 02:35:35 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:24.355 00:02:24.355 real 0m0.001s 00:02:24.355 user 0m0.000s 00:02:24.355 sys 0m0.000s 00:02:24.355 02:35:35 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:24.355 02:35:35 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.355 ************************************ 00:02:24.355 END TEST ubsan 00:02:24.355 ************************************ 00:02:24.355 02:35:35 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:24.355 02:35:35 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:24.355 02:35:35 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:24.355 02:35:35 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:24.355 02:35:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:24.355 02:35:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.355 ************************************ 00:02:24.355 START TEST build_native_dpdk 00:02:24.355 ************************************ 00:02:24.355 02:35:35 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 
00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:24.355 eeb0605f11 version: 23.11.0 00:02:24.355 238778122a doc: update release notes for 23.11 00:02:24.355 46aa6b3cfc doc: fix description of RSS features 00:02:24.355 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:24.355 7e421ae345 devtools: support skipping forbid rule check 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:24.355 02:35:35 build_native_dpdk -- 
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:24.355 02:35:35 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:24.355 02:35:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:24.614 02:35:35 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:24.614 patching file config/rte_config.h 00:02:24.614 Hunk #1 succeeded at 60 (offset 1 line). 
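The xtrace above steps through scripts/common.sh's `cmp_versions 23.11.0 '<' 21.11.0`: both versions are split on `.`/`-`/`:` into arrays, then compared component by component until one side wins (here 23 > 21, so the less-than test returns 1 and the rte_config.h patch path is taken). The logic can be sketched as a standalone function — `ver_lt` is a hypothetical name for illustration, not the helper's actual API, and this is a simplified re-sketch of the traced behavior, not the script itself:

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced in the log.
# ver_lt A B  -> exit 0 if A < B, exit 1 otherwise (hypothetical helper).
ver_lt() {
    local IFS=.-:            # split on dots, dashes, and colons, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1           # first differing component decides
        (( a < b )) && return 0
    done
    return 1                              # equal versions: not less-than
}

ver_lt 23.11.0 21.11.0 && echo lt || echo "not lt"   # not lt (matches the trace)
ver_lt 23.11.0 24.07.0 && echo lt || echo "not lt"   # lt
```

The same helper shape explains the later `lt 23.11.0 24.07.0` and `ge 23.11.0 24.07.0` traces: 23 < 24 decides both on the first component, which is why the pcapng patch is applied and `dpdk_kmods=false` is reached.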
00:02:24.614 02:35:35 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:24.614 02:35:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:24.615 02:35:35 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:24.615 patching file lib/pcapng/rte_pcapng.c 00:02:24.615 02:35:35 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:24.615 02:35:35 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:24.615 02:35:35 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:24.615 02:35:35 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:24.615 02:35:35 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:24.615 02:35:35 build_native_dpdk -- common/autobuild_common.sh@184 -- 
$ '[' Linux = FreeBSD ']' 00:02:24.615 02:35:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:24.615 02:35:35 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:29.903 The Meson build system 00:02:29.903 Version: 1.5.0 00:02:29.903 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:29.903 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:29.903 Build type: native build 00:02:29.903 Program cat found: YES (/usr/bin/cat) 00:02:29.903 Project name: DPDK 00:02:29.903 Project version: 23.11.0 00:02:29.903 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:29.903 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:29.903 Host machine cpu family: x86_64 00:02:29.903 Host machine cpu: x86_64 00:02:29.903 Message: ## Building in Developer Mode ## 00:02:29.903 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.903 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:29.903 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.903 Program python3 found: YES (/usr/bin/python3) 00:02:29.903 Program cat found: YES (/usr/bin/cat) 00:02:29.903 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:29.903 Compiler for C supports arguments -march=native: YES 00:02:29.903 Checking for size of "void *" : 8 00:02:29.903 Checking for size of "void *" : 8 (cached) 00:02:29.903 Library m found: YES 00:02:29.903 Library numa found: YES 00:02:29.903 Has header "numaif.h" : YES 00:02:29.903 Library fdt found: NO 00:02:29.903 Library execinfo found: NO 00:02:29.903 Has header "execinfo.h" : YES 00:02:29.903 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:29.903 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.903 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.903 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.903 Run-time dependency openssl found: YES 3.1.1 00:02:29.903 Run-time dependency libpcap found: YES 1.10.4 00:02:29.903 Has header "pcap.h" with dependency libpcap: YES 00:02:29.903 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.903 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.903 Compiler for C supports arguments -Wformat: YES 00:02:29.903 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.903 Compiler for C supports arguments -Wformat-security: NO 00:02:29.903 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.903 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.903 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.903 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.903 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.903 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.903 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.903 Compiler for C supports arguments -Wundef: YES 00:02:29.903 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.903 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.903 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:29.903 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:02:29.903 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.903 Program objdump found: YES (/usr/bin/objdump) 00:02:29.903 Compiler for C supports arguments -mavx512f: YES 00:02:29.903 Checking if "AVX512 checking" compiles: YES 00:02:29.903 Fetching value of define "__SSE4_2__" : 1 00:02:29.903 Fetching value of define "__AES__" : 1 00:02:29.903 Fetching value of define "__AVX__" : 1 00:02:29.903 Fetching value of define "__AVX2__" : 1 00:02:29.903 Fetching value of define "__AVX512BW__" : 1 00:02:29.903 Fetching value of define "__AVX512CD__" : 1 00:02:29.903 Fetching value of define "__AVX512DQ__" : 1 00:02:29.903 Fetching value of define "__AVX512F__" : 1 00:02:29.903 Fetching value of define "__AVX512VL__" : 1 00:02:29.903 Fetching value of define "__PCLMUL__" : 1 00:02:29.903 Fetching value of define "__RDRND__" : 1 00:02:29.904 Fetching value of define "__RDSEED__" : 1 00:02:29.904 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:29.904 Fetching value of define "__znver1__" : (undefined) 00:02:29.904 Fetching value of define "__znver2__" : (undefined) 00:02:29.904 Fetching value of define "__znver3__" : (undefined) 00:02:29.904 Fetching value of define "__znver4__" : (undefined) 00:02:29.904 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.904 Message: lib/log: Defining dependency "log" 00:02:29.904 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.904 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.904 Checking for function "getentropy" : NO 00:02:29.904 Message: lib/eal: Defining dependency "eal" 00:02:29.904 Message: lib/ring: Defining dependency "ring" 00:02:29.904 Message: lib/rcu: Defining dependency "rcu" 00:02:29.904 Message: lib/mempool: Defining dependency "mempool" 00:02:29.904 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.904 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.904 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:29.904 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:29.904 Compiler for C supports arguments -mpclmul: YES 00:02:29.904 Compiler for C supports arguments -maes: YES 00:02:29.904 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.904 Compiler for C supports arguments -mavx512bw: YES 00:02:29.904 Compiler for C supports arguments -mavx512dq: YES 00:02:29.904 Compiler for C supports arguments -mavx512vl: YES 00:02:29.904 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.904 Compiler for C supports arguments -mavx2: YES 00:02:29.904 Compiler for C supports arguments -mavx: YES 00:02:29.904 Message: lib/net: Defining dependency "net" 00:02:29.904 Message: lib/meter: Defining dependency "meter" 00:02:29.904 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.904 Message: lib/pci: Defining dependency "pci" 00:02:29.904 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.904 Message: lib/metrics: Defining dependency "metrics" 00:02:29.904 Message: lib/hash: Defining dependency "hash" 00:02:29.904 Message: lib/timer: Defining dependency "timer" 00:02:29.904 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.904 Message: lib/acl: Defining dependency "acl" 00:02:29.904 Message: lib/bbdev: Defining dependency "bbdev" 00:02:29.904 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:29.904 Run-time dependency libelf found: YES 0.191 00:02:29.904 Message: lib/bpf: Defining dependency "bpf" 00:02:29.904 Message: lib/cfgfile: Defining dependency 
"cfgfile" 00:02:29.904 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.904 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.904 Message: lib/distributor: Defining dependency "distributor" 00:02:29.904 Message: lib/dmadev: Defining dependency "dmadev" 00:02:29.904 Message: lib/efd: Defining dependency "efd" 00:02:29.904 Message: lib/eventdev: Defining dependency "eventdev" 00:02:29.904 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:29.904 Message: lib/gpudev: Defining dependency "gpudev" 00:02:29.904 Message: lib/gro: Defining dependency "gro" 00:02:29.904 Message: lib/gso: Defining dependency "gso" 00:02:29.904 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:29.904 Message: lib/jobstats: Defining dependency "jobstats" 00:02:29.904 Message: lib/latencystats: Defining dependency "latencystats" 00:02:29.904 Message: lib/lpm: Defining dependency "lpm" 00:02:29.904 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:29.904 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:29.904 Message: lib/member: Defining dependency "member" 00:02:29.904 Message: lib/pcapng: Defining dependency "pcapng" 00:02:29.904 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.904 Message: lib/power: Defining dependency "power" 00:02:29.904 Message: lib/rawdev: Defining dependency "rawdev" 00:02:29.904 Message: lib/regexdev: Defining dependency "regexdev" 00:02:29.904 Message: lib/mldev: Defining dependency "mldev" 00:02:29.904 Message: lib/rib: Defining dependency "rib" 00:02:29.904 Message: lib/reorder: Defining dependency "reorder" 00:02:29.904 Message: lib/sched: Defining dependency "sched" 00:02:29.904 Message: lib/security: Defining dependency "security" 00:02:29.904 Message: lib/stack: Defining dependency "stack" 00:02:29.904 Has header 
"linux/userfaultfd.h" : YES 00:02:29.904 Has header "linux/vduse.h" : YES 00:02:29.904 Message: lib/vhost: Defining dependency "vhost" 00:02:29.904 Message: lib/ipsec: Defining dependency "ipsec" 00:02:29.904 Message: lib/pdcp: Defining dependency "pdcp" 00:02:29.904 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.904 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.904 Message: lib/fib: Defining dependency "fib" 00:02:29.904 Message: lib/port: Defining dependency "port" 00:02:29.904 Message: lib/pdump: Defining dependency "pdump" 00:02:29.904 Message: lib/table: Defining dependency "table" 00:02:29.904 Message: lib/pipeline: Defining dependency "pipeline" 00:02:29.904 Message: lib/graph: Defining dependency "graph" 00:02:29.904 Message: lib/node: Defining dependency "node" 00:02:29.904 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.904 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.904 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:31.288 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:31.288 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:31.288 Compiler for C supports arguments -Wno-unused-value: YES 00:02:31.288 Compiler for C supports arguments -Wno-format: YES 00:02:31.288 Compiler for C supports arguments -Wno-format-security: YES 00:02:31.288 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:31.288 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:31.288 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:31.288 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:31.288 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:31.288 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:31.288 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:31.288 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:02:31.288 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:31.288 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:31.288 Has header "sys/epoll.h" : YES 00:02:31.288 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:31.288 Configuring doxy-api-html.conf using configuration 00:02:31.288 Configuring doxy-api-man.conf using configuration 00:02:31.288 Program mandb found: YES (/usr/bin/mandb) 00:02:31.288 Program sphinx-build found: NO 00:02:31.288 Configuring rte_build_config.h using configuration 00:02:31.288 Message: 00:02:31.288 ================= 00:02:31.288 Applications Enabled 00:02:31.288 ================= 00:02:31.288 00:02:31.288 apps: 00:02:31.288 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:31.288 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:31.288 test-pmd, test-regex, test-sad, test-security-perf, 00:02:31.288 00:02:31.288 Message: 00:02:31.288 ================= 00:02:31.288 Libraries Enabled 00:02:31.288 ================= 00:02:31.288 00:02:31.288 libs: 00:02:31.288 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:31.288 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:31.288 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:31.288 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:31.288 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:31.288 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:31.288 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:31.288 00:02:31.288 00:02:31.288 Message: 00:02:31.288 =============== 00:02:31.288 Drivers Enabled 00:02:31.288 =============== 00:02:31.288 00:02:31.288 common: 00:02:31.288 00:02:31.288 bus: 00:02:31.288 pci, vdev, 00:02:31.288 mempool: 00:02:31.288 ring, 00:02:31.288 dma: 
00:02:31.288 00:02:31.288 net: 00:02:31.288 i40e, 00:02:31.288 raw: 00:02:31.288 00:02:31.288 crypto: 00:02:31.288 00:02:31.288 compress: 00:02:31.288 00:02:31.288 regex: 00:02:31.288 00:02:31.288 ml: 00:02:31.288 00:02:31.288 vdpa: 00:02:31.288 00:02:31.288 event: 00:02:31.288 00:02:31.288 baseband: 00:02:31.288 00:02:31.288 gpu: 00:02:31.288 00:02:31.288 00:02:31.288 Message: 00:02:31.288 ================= 00:02:31.288 Content Skipped 00:02:31.288 ================= 00:02:31.288 00:02:31.288 apps: 00:02:31.288 00:02:31.288 libs: 00:02:31.288 00:02:31.288 drivers: 00:02:31.288 common/cpt: not in enabled drivers build config 00:02:31.288 common/dpaax: not in enabled drivers build config 00:02:31.288 common/iavf: not in enabled drivers build config 00:02:31.288 common/idpf: not in enabled drivers build config 00:02:31.288 common/mvep: not in enabled drivers build config 00:02:31.288 common/octeontx: not in enabled drivers build config 00:02:31.288 bus/auxiliary: not in enabled drivers build config 00:02:31.288 bus/cdx: not in enabled drivers build config 00:02:31.288 bus/dpaa: not in enabled drivers build config 00:02:31.288 bus/fslmc: not in enabled drivers build config 00:02:31.288 bus/ifpga: not in enabled drivers build config 00:02:31.288 bus/platform: not in enabled drivers build config 00:02:31.288 bus/vmbus: not in enabled drivers build config 00:02:31.288 common/cnxk: not in enabled drivers build config 00:02:31.288 common/mlx5: not in enabled drivers build config 00:02:31.288 common/nfp: not in enabled drivers build config 00:02:31.288 common/qat: not in enabled drivers build config 00:02:31.288 common/sfc_efx: not in enabled drivers build config 00:02:31.288 mempool/bucket: not in enabled drivers build config 00:02:31.288 mempool/cnxk: not in enabled drivers build config 00:02:31.288 mempool/dpaa: not in enabled drivers build config 00:02:31.288 mempool/dpaa2: not in enabled drivers build config 00:02:31.288 mempool/octeontx: not in enabled drivers build 
config 00:02:31.288 mempool/stack: not in enabled drivers build config 00:02:31.288 dma/cnxk: not in enabled drivers build config 00:02:31.288 dma/dpaa: not in enabled drivers build config 00:02:31.288 dma/dpaa2: not in enabled drivers build config 00:02:31.288 dma/hisilicon: not in enabled drivers build config 00:02:31.288 dma/idxd: not in enabled drivers build config 00:02:31.288 dma/ioat: not in enabled drivers build config 00:02:31.288 dma/skeleton: not in enabled drivers build config 00:02:31.288 net/af_packet: not in enabled drivers build config 00:02:31.288 net/af_xdp: not in enabled drivers build config 00:02:31.288 net/ark: not in enabled drivers build config 00:02:31.288 net/atlantic: not in enabled drivers build config 00:02:31.288 net/avp: not in enabled drivers build config 00:02:31.288 net/axgbe: not in enabled drivers build config 00:02:31.288 net/bnx2x: not in enabled drivers build config 00:02:31.288 net/bnxt: not in enabled drivers build config 00:02:31.288 net/bonding: not in enabled drivers build config 00:02:31.289 net/cnxk: not in enabled drivers build config 00:02:31.289 net/cpfl: not in enabled drivers build config 00:02:31.289 net/cxgbe: not in enabled drivers build config 00:02:31.289 net/dpaa: not in enabled drivers build config 00:02:31.289 net/dpaa2: not in enabled drivers build config 00:02:31.289 net/e1000: not in enabled drivers build config 00:02:31.289 net/ena: not in enabled drivers build config 00:02:31.289 net/enetc: not in enabled drivers build config 00:02:31.289 net/enetfec: not in enabled drivers build config 00:02:31.289 net/enic: not in enabled drivers build config 00:02:31.289 net/failsafe: not in enabled drivers build config 00:02:31.289 net/fm10k: not in enabled drivers build config 00:02:31.289 net/gve: not in enabled drivers build config 00:02:31.289 net/hinic: not in enabled drivers build config 00:02:31.289 net/hns3: not in enabled drivers build config 00:02:31.289 net/iavf: not in enabled drivers build config 
00:02:31.289 net/ice: not in enabled drivers build config 00:02:31.289 net/idpf: not in enabled drivers build config 00:02:31.289 net/igc: not in enabled drivers build config 00:02:31.289 net/ionic: not in enabled drivers build config 00:02:31.289 net/ipn3ke: not in enabled drivers build config 00:02:31.289 net/ixgbe: not in enabled drivers build config 00:02:31.289 net/mana: not in enabled drivers build config 00:02:31.289 net/memif: not in enabled drivers build config 00:02:31.289 net/mlx4: not in enabled drivers build config 00:02:31.289 net/mlx5: not in enabled drivers build config 00:02:31.289 net/mvneta: not in enabled drivers build config 00:02:31.289 net/mvpp2: not in enabled drivers build config 00:02:31.289 net/netvsc: not in enabled drivers build config 00:02:31.289 net/nfb: not in enabled drivers build config 00:02:31.289 net/nfp: not in enabled drivers build config 00:02:31.289 net/ngbe: not in enabled drivers build config 00:02:31.289 net/null: not in enabled drivers build config 00:02:31.289 net/octeontx: not in enabled drivers build config 00:02:31.289 net/octeon_ep: not in enabled drivers build config 00:02:31.289 net/pcap: not in enabled drivers build config 00:02:31.289 net/pfe: not in enabled drivers build config 00:02:31.289 net/qede: not in enabled drivers build config 00:02:31.289 net/ring: not in enabled drivers build config 00:02:31.289 net/sfc: not in enabled drivers build config 00:02:31.289 net/softnic: not in enabled drivers build config 00:02:31.289 net/tap: not in enabled drivers build config 00:02:31.289 net/thunderx: not in enabled drivers build config 00:02:31.289 net/txgbe: not in enabled drivers build config 00:02:31.289 net/vdev_netvsc: not in enabled drivers build config 00:02:31.289 net/vhost: not in enabled drivers build config 00:02:31.289 net/virtio: not in enabled drivers build config 00:02:31.289 net/vmxnet3: not in enabled drivers build config 00:02:31.289 raw/cnxk_bphy: not in enabled drivers build config 00:02:31.289 
raw/cnxk_gpio: not in enabled drivers build config 00:02:31.289 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:31.289 raw/ifpga: not in enabled drivers build config 00:02:31.289 raw/ntb: not in enabled drivers build config 00:02:31.289 raw/skeleton: not in enabled drivers build config 00:02:31.289 crypto/armv8: not in enabled drivers build config 00:02:31.289 crypto/bcmfs: not in enabled drivers build config 00:02:31.289 crypto/caam_jr: not in enabled drivers build config 00:02:31.289 crypto/ccp: not in enabled drivers build config 00:02:31.289 crypto/cnxk: not in enabled drivers build config 00:02:31.289 crypto/dpaa_sec: not in enabled drivers build config 00:02:31.289 crypto/dpaa2_sec: not in enabled drivers build config 00:02:31.289 crypto/ipsec_mb: not in enabled drivers build config 00:02:31.289 crypto/mlx5: not in enabled drivers build config 00:02:31.289 crypto/mvsam: not in enabled drivers build config 00:02:31.289 crypto/nitrox: not in enabled drivers build config 00:02:31.289 crypto/null: not in enabled drivers build config 00:02:31.289 crypto/octeontx: not in enabled drivers build config 00:02:31.289 crypto/openssl: not in enabled drivers build config 00:02:31.289 crypto/scheduler: not in enabled drivers build config 00:02:31.289 crypto/uadk: not in enabled drivers build config 00:02:31.289 crypto/virtio: not in enabled drivers build config 00:02:31.289 compress/isal: not in enabled drivers build config 00:02:31.289 compress/mlx5: not in enabled drivers build config 00:02:31.289 compress/octeontx: not in enabled drivers build config 00:02:31.289 compress/zlib: not in enabled drivers build config 00:02:31.289 regex/mlx5: not in enabled drivers build config 00:02:31.289 regex/cn9k: not in enabled drivers build config 00:02:31.289 ml/cnxk: not in enabled drivers build config 00:02:31.289 vdpa/ifc: not in enabled drivers build config 00:02:31.289 vdpa/mlx5: not in enabled drivers build config 00:02:31.289 vdpa/nfp: not in enabled drivers build 
config 00:02:31.289 vdpa/sfc: not in enabled drivers build config 00:02:31.289 event/cnxk: not in enabled drivers build config 00:02:31.289 event/dlb2: not in enabled drivers build config 00:02:31.289 event/dpaa: not in enabled drivers build config 00:02:31.289 event/dpaa2: not in enabled drivers build config 00:02:31.289 event/dsw: not in enabled drivers build config 00:02:31.289 event/opdl: not in enabled drivers build config 00:02:31.289 event/skeleton: not in enabled drivers build config 00:02:31.289 event/sw: not in enabled drivers build config 00:02:31.289 event/octeontx: not in enabled drivers build config 00:02:31.289 baseband/acc: not in enabled drivers build config 00:02:31.289 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:31.289 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:31.289 baseband/la12xx: not in enabled drivers build config 00:02:31.289 baseband/null: not in enabled drivers build config 00:02:31.289 baseband/turbo_sw: not in enabled drivers build config 00:02:31.289 gpu/cuda: not in enabled drivers build config 00:02:31.289 00:02:31.289 00:02:31.289 Build targets in project: 217 00:02:31.289 00:02:31.289 DPDK 23.11.0 00:02:31.289 00:02:31.289 User defined options 00:02:31.289 libdir : lib 00:02:31.289 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:31.289 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:31.289 c_link_args : 00:02:31.289 enable_docs : false 00:02:31.289 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:31.289 enable_kmods : false 00:02:31.289 machine : native 00:02:31.289 tests : false 00:02:31.289 00:02:31.289 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:31.289 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:02:31.289 02:35:42 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:31.289 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:31.549 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:31.549 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:31.549 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:31.549 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:31.550 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:31.550 [6/707] Linking static target lib/librte_kvargs.a 00:02:31.550 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:31.550 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:31.550 [9/707] Linking static target lib/librte_log.a 00:02:31.550 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:31.810 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.810 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:31.810 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:31.810 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:31.810 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:31.810 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:31.810 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.068 [18/707] Linking target lib/librte_log.so.24.0 00:02:32.068 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:32.068 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:32.068 [21/707] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:32.068 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:32.068 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:32.332 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:32.332 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:32.332 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:32.332 [27/707] Linking target lib/librte_kvargs.so.24.0 00:02:32.332 [28/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:32.332 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:32.332 [30/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:32.332 [31/707] Linking static target lib/librte_telemetry.a 00:02:32.333 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:32.333 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:32.613 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:32.613 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:32.613 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:32.613 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:32.613 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:32.613 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:32.613 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:32.613 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:32.613 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:32.613 [43/707] Linking target lib/librte_telemetry.so.24.0 00:02:32.893 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:32.893 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:32.893 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:32.893 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:32.893 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:32.893 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:32.894 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:33.154 [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:33.154 [52/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:33.154 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:33.154 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:33.154 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.154 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:33.154 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:33.154 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:33.154 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:33.414 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:33.414 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:33.414 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:33.414 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:33.414 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:33.414 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:33.415 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.415 [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:33.415 [68/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:33.675 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:33.675 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:33.675 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:33.675 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:33.675 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:33.675 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:33.675 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:33.675 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:33.675 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:33.675 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:33.935 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:33.935 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:33.935 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:33.935 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:34.195 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:34.195 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:34.195 [85/707] Linking static target lib/librte_ring.a 00:02:34.195 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:34.195 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:34.195 [88/707] Linking static target lib/librte_eal.a 00:02:34.195 [89/707] 
Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.453 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:34.453 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:34.453 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:34.453 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:34.453 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:34.453 [95/707] Linking static target lib/librte_mempool.a 00:02:34.711 [96/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:34.711 [97/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:34.711 [98/707] Linking static target lib/librte_rcu.a 00:02:34.711 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:34.711 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:34.712 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:34.712 [102/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:34.712 [103/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:34.971 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:34.971 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.971 [106/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.971 [107/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.971 [108/707] Linking static target lib/librte_net.a 00:02:34.971 [109/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:34.971 [110/707] Linking static target lib/librte_mbuf.a 00:02:34.971 [111/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.971 [112/707] Linking static target lib/librte_meter.a 00:02:35.230 [113/707] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.230 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.230 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.230 [116/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.230 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.488 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.488 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.747 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.747 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:36.006 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:36.006 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:36.006 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:36.006 [125/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:36.006 [126/707] Linking static target lib/librte_pci.a 00:02:36.006 [127/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:36.006 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:36.266 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:36.266 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:36.266 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:36.266 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:36.266 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.266 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:36.266 [135/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:36.266 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:36.266 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:36.266 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:36.266 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:36.266 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:36.526 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:36.526 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:36.526 [143/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:36.526 [144/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:36.526 [145/707] Linking static target lib/librte_cmdline.a 00:02:36.785 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:36.785 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:36.785 [148/707] Linking static target lib/librte_metrics.a 00:02:36.785 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:36.785 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:37.044 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.044 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:37.044 [153/707] Linking static target lib/librte_timer.a 00:02:37.303 [154/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:37.303 [155/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.303 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:37.303 [157/707] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:37.562 [158/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:37.562 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:37.562 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:37.822 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:37.822 [162/707] Linking static target lib/librte_bitratestats.a 00:02:38.081 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:38.081 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.081 [165/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:38.081 [166/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:38.081 [167/707] Linking static target lib/librte_bbdev.a 00:02:38.340 [168/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:38.600 [169/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:38.600 [170/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:38.600 [171/707] Linking static target lib/librte_hash.a 00:02:38.600 [172/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:38.600 [173/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.600 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:38.600 [175/707] Linking static target lib/librte_ethdev.a 00:02:38.860 [176/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:38.860 [177/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:38.860 [178/707] Linking static target lib/acl/libavx2_tmp.a 00:02:38.860 [179/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:38.860 [180/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.121 [181/707] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:39.121 [182/707] Linking target lib/librte_eal.so.24.0 00:02:39.121 [183/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.121 [184/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:39.121 [185/707] Linking static target lib/librte_cfgfile.a 00:02:39.121 [186/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:39.121 [187/707] Linking target lib/librte_ring.so.24.0 00:02:39.121 [188/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:39.381 [189/707] Linking target lib/librte_meter.so.24.0 00:02:39.381 [190/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:39.381 [191/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:39.381 [192/707] Linking target lib/librte_rcu.so.24.0 00:02:39.381 [193/707] Linking target lib/librte_mempool.so.24.0 00:02:39.381 [194/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:39.381 [195/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:39.381 [196/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.381 [197/707] Linking target lib/librte_pci.so.24.0 00:02:39.381 [198/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:39.381 [199/707] Linking target lib/librte_timer.so.24.0 00:02:39.381 [200/707] Linking target lib/librte_cfgfile.so.24.0 00:02:39.381 [201/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:39.381 [202/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:39.381 [203/707] Linking target lib/librte_mbuf.so.24.0 00:02:39.640 [204/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:39.640 [205/707] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:39.640 [206/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:39.640 [207/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:39.640 [208/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:39.640 [209/707] Linking target lib/librte_net.so.24.0 00:02:39.640 [210/707] Linking target lib/librte_bbdev.so.24.0 00:02:39.640 [211/707] Linking static target lib/librte_bpf.a 00:02:39.640 [212/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:39.900 [213/707] Linking target lib/librte_cmdline.so.24.0 00:02:39.900 [214/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:39.900 [215/707] Linking target lib/librte_hash.so.24.0 00:02:39.900 [216/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:39.900 [217/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:39.900 [218/707] Linking static target lib/librte_compressdev.a 00:02:39.900 [219/707] Linking static target lib/librte_acl.a 00:02:39.900 [220/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.900 [221/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:39.900 [222/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:39.900 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:40.165 [224/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.165 [225/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:40.165 [226/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:40.165 [227/707] Linking static target lib/librte_distributor.a 00:02:40.165 [228/707] 
Linking target lib/librte_acl.so.24.0 00:02:40.427 [229/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:40.427 [230/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.427 [231/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:40.427 [232/707] Linking target lib/librte_compressdev.so.24.0 00:02:40.427 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:40.427 [234/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.427 [235/707] Linking target lib/librte_distributor.so.24.0 00:02:40.687 [236/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:40.687 [237/707] Linking static target lib/librte_dmadev.a 00:02:40.947 [238/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:40.947 [239/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:40.947 [240/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.947 [241/707] Linking target lib/librte_dmadev.so.24.0 00:02:41.207 [242/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:41.207 [243/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:41.207 [244/707] Linking static target lib/librte_efd.a 00:02:41.207 [245/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:41.207 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.467 [247/707] Linking target lib/librte_efd.so.24.0 00:02:41.467 [248/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:41.467 [249/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:41.467 [250/707] Linking static target 
lib/librte_cryptodev.a 00:02:41.467 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:41.727 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:41.727 [253/707] Linking static target lib/librte_dispatcher.a 00:02:41.727 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:41.727 [255/707] Linking static target lib/librte_gpudev.a 00:02:41.986 [256/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:41.986 [257/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:41.986 [258/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:41.986 [259/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:41.986 [260/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.245 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:42.505 [262/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:42.505 [263/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.505 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:42.505 [265/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:42.505 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:42.505 [267/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.506 [268/707] Linking target lib/librte_cryptodev.so.24.0 00:02:42.506 [269/707] Linking static target lib/librte_gro.a 00:02:42.506 [270/707] Linking target lib/librte_gpudev.so.24.0 00:02:42.506 [271/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:42.766 [272/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:42.766 [273/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 
00:02:42.766 [274/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.766 [275/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:42.766 [276/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.766 [277/707] Linking static target lib/librte_eventdev.a 00:02:42.766 [278/707] Linking target lib/librte_ethdev.so.24.0 00:02:42.766 [279/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:42.766 [280/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:42.766 [281/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:42.766 [282/707] Linking static target lib/librte_gso.a 00:02:43.038 [283/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:43.038 [284/707] Linking target lib/librte_metrics.so.24.0 00:02:43.038 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:43.038 [286/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.038 [287/707] Linking target lib/librte_bpf.so.24.0 00:02:43.038 [288/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:43.038 [289/707] Linking target lib/librte_gro.so.24.0 00:02:43.038 [290/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:43.038 [291/707] Linking target lib/librte_gso.so.24.0 00:02:43.038 [292/707] Linking target lib/librte_bitratestats.so.24.0 00:02:43.038 [293/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:43.038 [294/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:43.299 [295/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:43.299 [296/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:43.299 
[297/707] Linking static target lib/librte_jobstats.a 00:02:43.299 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:43.299 [299/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:43.299 [300/707] Linking static target lib/librte_ip_frag.a 00:02:43.559 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.559 [302/707] Linking target lib/librte_jobstats.so.24.0 00:02:43.559 [303/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:43.559 [304/707] Linking static target lib/librte_latencystats.a 00:02:43.559 [305/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:43.559 [306/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.559 [307/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:43.559 [308/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:43.559 [309/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:43.559 [310/707] Linking target lib/librte_ip_frag.so.24.0 00:02:43.559 [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.819 [312/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:43.819 [313/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.819 [314/707] Linking target lib/librte_latencystats.so.24.0 00:02:43.819 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:43.819 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.819 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:43.819 [318/707] Linking static target lib/librte_lpm.a 00:02:44.079 [319/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:44.079 [320/707] 
Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.079 [321/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:44.079 [322/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.079 [323/707] Linking static target lib/librte_pcapng.a 00:02:44.079 [324/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.338 [325/707] Linking target lib/librte_lpm.so.24.0 00:02:44.338 [326/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:44.338 [327/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.338 [328/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:44.338 [329/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.338 [330/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.338 [331/707] Linking target lib/librte_pcapng.so.24.0 00:02:44.338 [332/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.338 [333/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.597 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.597 [335/707] Linking target lib/librte_eventdev.so.24.0 00:02:44.597 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:44.597 [337/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.597 [338/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:44.597 [339/707] Linking target lib/librte_dispatcher.so.24.0 00:02:44.597 [340/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:44.856 [341/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.856 [342/707] Compiling C object 
lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:44.856 [343/707] Linking static target lib/librte_regexdev.a 00:02:44.856 [344/707] Linking static target lib/librte_power.a 00:02:44.856 [345/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:44.856 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:44.856 [347/707] Linking static target lib/librte_rawdev.a 00:02:44.856 [348/707] Linking static target lib/librte_member.a 00:02:44.856 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:44.856 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:44.856 [351/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:44.856 [352/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:45.117 [353/707] Linking static target lib/librte_mldev.a 00:02:45.117 [354/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.117 [355/707] Linking target lib/librte_member.so.24.0 00:02:45.117 [356/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:45.117 [357/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.117 [358/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:45.117 [359/707] Linking target lib/librte_rawdev.so.24.0 00:02:45.377 [360/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:45.377 [361/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.377 [362/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:45.377 [363/707] Linking static target lib/librte_reorder.a 00:02:45.377 [364/707] Linking target lib/librte_power.so.24.0 00:02:45.377 [365/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:45.377 [366/707] Linking static target lib/librte_rib.a 00:02:45.377 
[367/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.377 [368/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:45.377 [369/707] Linking target lib/librte_regexdev.so.24.0 00:02:45.637 [370/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:45.637 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:45.637 [372/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:45.637 [373/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.637 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:45.637 [375/707] Linking target lib/librte_reorder.so.24.0 00:02:45.637 [376/707] Linking static target lib/librte_stack.a 00:02:45.897 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:45.897 [378/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.897 [379/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.897 [380/707] Linking static target lib/librte_security.a 00:02:45.897 [381/707] Linking target lib/librte_rib.so.24.0 00:02:45.897 [382/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.897 [383/707] Linking target lib/librte_stack.so.24.0 00:02:45.897 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:45.897 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.897 [386/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:46.157 [387/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.157 [388/707] Linking target lib/librte_mldev.so.24.0 00:02:46.157 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.157 
[390/707] Linking target lib/librte_security.so.24.0 00:02:46.157 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:46.157 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:46.157 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:46.157 [394/707] Linking static target lib/librte_sched.a 00:02:46.433 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:46.433 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:46.433 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.693 [398/707] Linking target lib/librte_sched.so.24.0 00:02:46.693 [399/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.693 [400/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:46.693 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:46.693 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:46.953 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.953 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:47.214 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:47.214 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:47.214 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:47.474 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:47.474 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:47.474 [410/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:47.474 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:47.474 [412/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:47.474 [413/707] Linking static target lib/librte_ipsec.a 00:02:47.474 [414/707] Compiling C object 
lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:47.734 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:47.734 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.994 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:47.994 [418/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:47.994 [419/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:47.994 [420/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:48.253 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:48.253 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:48.253 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:48.253 [424/707] Linking static target lib/librte_fib.a 00:02:48.253 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:48.513 [426/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.513 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:48.513 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:48.513 [429/707] Linking target lib/librte_fib.so.24.0 00:02:48.513 [430/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:48.513 [431/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:48.513 [432/707] Linking static target lib/librte_pdcp.a 00:02:48.773 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.773 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:49.034 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:49.034 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:49.034 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:49.034 [438/707] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:49.034 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:49.294 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:49.294 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:49.554 [442/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:49.554 [443/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:49.554 [444/707] Linking static target lib/librte_port.a 00:02:49.554 [445/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:49.554 [446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:49.554 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:49.554 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:49.814 [449/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:49.814 [450/707] Linking static target lib/librte_pdump.a 00:02:49.814 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:49.814 [452/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:49.814 [453/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.074 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:50.074 [455/707] Linking target lib/librte_port.so.24.0 00:02:50.074 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.074 [457/707] Linking target lib/librte_pdump.so.24.0 00:02:50.074 [458/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:50.334 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:50.334 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:50.334 [461/707] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:50.334 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:50.334 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:50.334 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:50.594 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:50.854 [466/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:50.854 [467/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:50.854 [468/707] Linking static target lib/librte_table.a 00:02:50.854 [469/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.113 [470/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:51.113 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:51.113 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.376 [473/707] Linking target lib/librte_table.so.24.0 00:02:51.376 [474/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:51.376 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:51.376 [476/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:51.376 [477/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:51.636 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:51.636 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:51.636 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:51.904 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:51.904 [482/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:51.904 [483/707] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:52.188 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:52.188 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:52.188 [486/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:52.188 [487/707] Linking static target lib/librte_graph.a 00:02:52.188 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:52.188 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:52.456 [490/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:52.725 [491/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.725 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:52.725 [493/707] Linking target lib/librte_graph.so.24.0 00:02:52.725 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:52.725 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:52.985 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:52.985 [497/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:52.985 [498/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:52.985 [499/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:52.985 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:53.245 [501/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:53.245 [502/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:53.245 [503/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:53.245 [504/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:53.504 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:53.504 [506/707] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:53.504 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:53.504 [508/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:53.504 [509/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:53.504 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:53.504 [511/707] Linking static target lib/librte_node.a 00:02:53.764 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:53.764 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.764 [514/707] Linking target lib/librte_node.so.24.0 00:02:53.764 [515/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:53.764 [516/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.023 [517/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.023 [518/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.023 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.023 [520/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.023 [521/707] Linking static target drivers/librte_bus_vdev.a 00:02:54.023 [522/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.023 [523/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.023 [524/707] Linking static target drivers/librte_bus_pci.a 00:02:54.289 [525/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:54.289 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:54.289 [527/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:54.289 [528/707] Compiling C object 
drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.289 [529/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.289 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.289 [531/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:54.548 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.548 [533/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:54.548 [534/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:54.548 [535/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:54.548 [536/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:54.548 [537/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.548 [538/707] Linking static target drivers/librte_mempool_ring.a 00:02:54.548 [539/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.548 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:54.807 [541/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:54.807 [542/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:54.807 [543/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:54.807 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:55.067 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:55.327 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:55.327 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:55.587 [548/707] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:55.847 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:55.847 [550/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:55.847 [551/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:56.107 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:56.107 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:56.107 [554/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:56.107 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:56.366 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:56.366 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:56.648 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:56.648 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:56.648 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:56.908 [561/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:56.908 [562/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:57.169 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:57.169 [564/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:57.169 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:57.429 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:57.429 [567/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:57.429 [568/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:57.429 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:57.687 [570/707] Compiling C object 
app/dpdk-graph.p/graph_main.c.o 00:02:57.687 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:57.687 [572/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:57.687 [573/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:57.687 [574/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:57.687 [575/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:57.946 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:58.205 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:58.205 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:58.205 [579/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:58.205 [580/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:58.463 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:58.463 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:58.463 [583/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:58.463 [584/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:58.722 [585/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:58.722 [586/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:58.722 [587/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:58.722 [588/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:58.722 [589/707] Linking static target drivers/librte_net_i40e.a 00:02:58.982 [590/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:58.982 [591/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:58.982 [592/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:58.982 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:59.242 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:59.242 [595/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:59.242 [596/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.242 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:59.242 [598/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:59.502 [599/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:59.762 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:59.762 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:59.762 [602/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:59.762 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:59.762 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:59.762 [605/707] Linking static target lib/librte_vhost.a 00:02:59.762 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:59.762 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:00.022 [608/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:00.022 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:00.022 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:00.282 [611/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:00.282 [612/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:00.282 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:00.282 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:00.543 [615/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:00.543 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:00.543 [617/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.803 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:00.803 [619/707] Linking target lib/librte_vhost.so.24.0 00:03:00.803 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:01.374 [621/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:01.374 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:01.374 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:01.374 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:01.374 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:01.374 [626/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:01.651 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:01.651 [628/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:01.651 [629/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:01.651 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:01.651 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:01.651 [632/707] 
Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:01.912 [633/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:01.912 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:01.912 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:01.912 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:02.172 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:02.172 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:02.172 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:02.172 [640/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:02.432 [641/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:02.432 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:02.432 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:02.432 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:02.692 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:02.692 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:02.692 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:02.692 [648/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:02.692 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:02.692 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:02.964 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:03.227 [652/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:03.227 [653/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:03.227 [654/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:03.486 [655/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:03.486 [656/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:03.486 [657/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:03.486 [658/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:03.486 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:03.746 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:03.746 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:04.005 [662/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:04.005 [663/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:04.005 [664/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:04.264 [665/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:04.264 [666/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:04.264 [667/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:04.522 [668/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:04.522 [669/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:04.522 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:04.782 [671/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:05.042 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:05.042 [673/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:05.303 [674/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:05.303 [675/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:05.563 [676/707] Compiling C object 
app/dpdk-test-sad.p/test-sad_main.c.o
00:03:05.563 [677/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:05.563 [678/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:05.563 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:05.823 [680/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:05.823 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:05.823 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:06.083 [683/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:06.083 [684/707] Linking static target lib/librte_pipeline.a
00:03:06.083 [685/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:06.344 [686/707] Linking target app/dpdk-graph
00:03:06.344 [687/707] Linking target app/dpdk-test-bbdev
00:03:06.344 [688/707] Linking target app/dpdk-dumpcap
00:03:06.344 [689/707] Linking target app/dpdk-test-cmdline
00:03:06.344 [690/707] Linking target app/dpdk-test-acl
00:03:06.604 [691/707] Linking target app/dpdk-proc-info
00:03:06.604 [692/707] Linking target app/dpdk-pdump
00:03:06.604 [693/707] Linking target app/dpdk-test-compress-perf
00:03:06.604 [694/707] Linking target app/dpdk-test-crypto-perf
00:03:06.865 [695/707] Linking target app/dpdk-test-fib
00:03:06.865 [696/707] Linking target app/dpdk-test-dma-perf
00:03:06.865 [697/707] Linking target app/dpdk-test-eventdev
00:03:06.865 [698/707] Linking target app/dpdk-test-gpudev
00:03:06.865 [699/707] Linking target app/dpdk-test-flow-perf
00:03:06.865 [700/707] Linking target app/dpdk-test-pipeline
00:03:06.865 [701/707] Linking target app/dpdk-test-mldev
00:03:06.865 [702/707] Linking target app/dpdk-test-regex
00:03:06.865 [703/707] Linking target app/dpdk-testpmd
00:03:07.124 [704/707] Linking target app/dpdk-test-sad
00:03:07.124 [705/707] Linking target app/dpdk-test-security-perf
00:03:12.411 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.411 [707/707] Linking target lib/librte_pipeline.so.24.0
00:03:12.411 02:36:22 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:12.411 02:36:22 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:12.411 02:36:22 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:12.411 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:12.411 [0/1] Installing files.
00:03:12.411 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:12.411 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.412 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:12.413 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.414 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 
00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.415 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.415 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.416 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:12.416 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_rcu.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing 
lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing 
lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.416 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_rawdev.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 
Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.417 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.417 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.417 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.417 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:12.417 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-graph to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing 
/home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.417 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.418 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing 
/home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 
Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.419 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.420 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:12.420 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:12.420 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:12.420 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:12.420 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:12.420 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:12.420 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:12.420 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:12.420 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:12.420 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:12.420 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:12.420 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:12.420 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:12.420 Installing symlink pointing to librte_mempool.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:12.420 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:12.420 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:12.420 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:12.420 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:12.420 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:12.420 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:12.420 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:12.420 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:12.420 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:12.420 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:12.420 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:12.420 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:12.420 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:12.420 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:12.420 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:12.420 Installing symlink pointing to librte_hash.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:12.420 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:12.420 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:12.420 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:12.420 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:12.420 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:12.420 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:12.420 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:12.420 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:12.420 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:12.421 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:12.421 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:12.421 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:12.421 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:12.421 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:12.421 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:12.421 Installing symlink pointing to 
librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:12.421 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:12.421 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:12.421 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:12.421 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:12.421 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:12.421 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:12.421 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:12.421 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:12.421 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:12.421 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:12.421 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:12.421 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:12.421 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:12.421 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:12.421 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 
00:03:12.421 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:12.421 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:12.421 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:12.421 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:12.421 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:12.421 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:12.421 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:12.421 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:12.421 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:12.421 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:12.421 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:12.421 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:12.421 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:12.421 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:12.421 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:12.421 Installing symlink pointing to librte_power.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:12.421 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:12.421 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:12.421 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:12.421 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:12.421 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:12.421 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:12.421 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:12.421 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:12.421 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:12.421 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:12.421 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:12.421 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:12.421 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:12.421 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:12.421 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:12.421 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:12.421 
'./librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:12.421 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:12.421 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:12.421 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:12.421 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:12.421 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:12.421 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:12.421 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:12.421 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:12.421 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:12.421 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:12.421 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:12.421 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:12.421 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:12.421 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:12.421 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:12.421 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:12.421 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:12.421 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:12.421 Installing symlink pointing to librte_fib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:12.421 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:12.421 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:12.421 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:12.421 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:12.421 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:12.421 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:12.421 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:12.421 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:12.421 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:12.421 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:12.421 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:12.421 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:12.421 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:12.421 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:12.421 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:12.421 Installing 
symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:12.421 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:12.421 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:12.421 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:12.421 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:12.421 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:12.421 02:36:23 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:12.421 02:36:23 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:12.421 00:03:12.421 real 0m48.105s 00:03:12.421 user 4m58.351s 00:03:12.421 sys 0m57.356s 00:03:12.421 02:36:23 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:12.421 02:36:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:12.421 ************************************ 00:03:12.421 END TEST build_native_dpdk 00:03:12.421 ************************************ 00:03:12.681 02:36:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:12.681 02:36:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:12.681 02:36:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:12.681 02:36:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:12.681 02:36:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:12.681 02:36:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:12.681 02:36:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:12.681 02:36:23 -- spdk/autobuild.sh@67 -- $ 
/home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:12.681 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:12.681 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:12.681 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:12.681 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:13.250 Using 'verbs' RDMA provider 00:03:29.531 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:47.626 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:47.626 Creating mk/config.mk...done. 00:03:47.626 Creating mk/cc.flags.mk...done. 00:03:47.626 Type 'make' to build. 00:03:47.626 02:36:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:47.626 02:36:57 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:47.626 02:36:57 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:47.626 02:36:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:47.626 ************************************ 00:03:47.626 START TEST make 00:03:47.626 ************************************ 00:03:47.626 02:36:57 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:47.626 make[1]: Nothing to be done for 'all'. 
00:04:34.355 CC lib/ut_mock/mock.o 00:04:34.355 CC lib/ut/ut.o 00:04:34.355 CC lib/log/log.o 00:04:34.355 CC lib/log/log_flags.o 00:04:34.355 CC lib/log/log_deprecated.o 00:04:34.355 LIB libspdk_log.a 00:04:34.355 LIB libspdk_ut_mock.a 00:04:34.355 LIB libspdk_ut.a 00:04:34.355 SO libspdk_ut_mock.so.6.0 00:04:34.355 SO libspdk_log.so.7.0 00:04:34.355 SO libspdk_ut.so.2.0 00:04:34.355 SYMLINK libspdk_ut.so 00:04:34.355 SYMLINK libspdk_ut_mock.so 00:04:34.355 SYMLINK libspdk_log.so 00:04:34.355 CC lib/ioat/ioat.o 00:04:34.355 CC lib/dma/dma.o 00:04:34.355 CC lib/util/base64.o 00:04:34.355 CC lib/util/bit_array.o 00:04:34.355 CC lib/util/cpuset.o 00:04:34.355 CC lib/util/crc32.o 00:04:34.355 CC lib/util/crc16.o 00:04:34.355 CC lib/util/crc32c.o 00:04:34.355 CXX lib/trace_parser/trace.o 00:04:34.355 CC lib/vfio_user/host/vfio_user_pci.o 00:04:34.355 CC lib/vfio_user/host/vfio_user.o 00:04:34.355 CC lib/util/crc32_ieee.o 00:04:34.355 CC lib/util/crc64.o 00:04:34.355 LIB libspdk_dma.a 00:04:34.355 CC lib/util/dif.o 00:04:34.355 SO libspdk_dma.so.5.0 00:04:34.355 CC lib/util/fd.o 00:04:34.355 CC lib/util/fd_group.o 00:04:34.355 SYMLINK libspdk_dma.so 00:04:34.355 CC lib/util/file.o 00:04:34.355 LIB libspdk_ioat.a 00:04:34.355 CC lib/util/hexlify.o 00:04:34.355 SO libspdk_ioat.so.7.0 00:04:34.355 CC lib/util/iov.o 00:04:34.355 SYMLINK libspdk_ioat.so 00:04:34.355 CC lib/util/math.o 00:04:34.355 CC lib/util/net.o 00:04:34.355 LIB libspdk_vfio_user.a 00:04:34.355 CC lib/util/pipe.o 00:04:34.355 SO libspdk_vfio_user.so.5.0 00:04:34.355 CC lib/util/strerror_tls.o 00:04:34.355 CC lib/util/string.o 00:04:34.355 SYMLINK libspdk_vfio_user.so 00:04:34.355 CC lib/util/uuid.o 00:04:34.355 CC lib/util/xor.o 00:04:34.355 CC lib/util/zipf.o 00:04:34.355 CC lib/util/md5.o 00:04:34.355 LIB libspdk_util.a 00:04:34.355 SO libspdk_util.so.10.0 00:04:34.355 LIB libspdk_trace_parser.a 00:04:34.355 SYMLINK libspdk_util.so 00:04:34.355 SO libspdk_trace_parser.so.6.0 00:04:34.355 SYMLINK 
libspdk_trace_parser.so 00:04:34.355 CC lib/rdma_provider/common.o 00:04:34.355 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:34.355 CC lib/rdma_utils/rdma_utils.o 00:04:34.355 CC lib/conf/conf.o 00:04:34.355 CC lib/json/json_parse.o 00:04:34.355 CC lib/json/json_util.o 00:04:34.355 CC lib/env_dpdk/env.o 00:04:34.355 CC lib/env_dpdk/memory.o 00:04:34.355 CC lib/vmd/vmd.o 00:04:34.355 CC lib/idxd/idxd.o 00:04:34.355 CC lib/vmd/led.o 00:04:34.355 LIB libspdk_rdma_provider.a 00:04:34.355 SO libspdk_rdma_provider.so.6.0 00:04:34.355 CC lib/json/json_write.o 00:04:34.355 LIB libspdk_conf.a 00:04:34.355 CC lib/env_dpdk/pci.o 00:04:34.355 SO libspdk_conf.so.6.0 00:04:34.355 SYMLINK libspdk_rdma_provider.so 00:04:34.355 CC lib/idxd/idxd_user.o 00:04:34.355 LIB libspdk_rdma_utils.a 00:04:34.355 SO libspdk_rdma_utils.so.1.0 00:04:34.355 SYMLINK libspdk_conf.so 00:04:34.355 CC lib/env_dpdk/init.o 00:04:34.355 CC lib/env_dpdk/threads.o 00:04:34.355 SYMLINK libspdk_rdma_utils.so 00:04:34.355 CC lib/env_dpdk/pci_ioat.o 00:04:34.355 CC lib/env_dpdk/pci_virtio.o 00:04:34.355 CC lib/env_dpdk/pci_vmd.o 00:04:34.355 CC lib/env_dpdk/pci_idxd.o 00:04:34.355 LIB libspdk_json.a 00:04:34.355 SO libspdk_json.so.6.0 00:04:34.355 CC lib/env_dpdk/pci_event.o 00:04:34.355 CC lib/env_dpdk/sigbus_handler.o 00:04:34.355 CC lib/env_dpdk/pci_dpdk.o 00:04:34.355 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:34.355 SYMLINK libspdk_json.so 00:04:34.355 CC lib/idxd/idxd_kernel.o 00:04:34.355 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:34.355 LIB libspdk_vmd.a 00:04:34.355 SO libspdk_vmd.so.6.0 00:04:34.355 SYMLINK libspdk_vmd.so 00:04:34.355 LIB libspdk_idxd.a 00:04:34.355 CC lib/jsonrpc/jsonrpc_server.o 00:04:34.355 SO libspdk_idxd.so.12.1 00:04:34.355 CC lib/jsonrpc/jsonrpc_client.o 00:04:34.355 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:34.355 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:34.355 SYMLINK libspdk_idxd.so 00:04:34.355 LIB libspdk_jsonrpc.a 00:04:34.355 SO libspdk_jsonrpc.so.6.0 00:04:34.355 SYMLINK 
libspdk_jsonrpc.so 00:04:34.355 CC lib/rpc/rpc.o 00:04:34.355 LIB libspdk_env_dpdk.a 00:04:34.355 SO libspdk_env_dpdk.so.15.0 00:04:34.355 LIB libspdk_rpc.a 00:04:34.355 SO libspdk_rpc.so.6.0 00:04:34.355 SYMLINK libspdk_rpc.so 00:04:34.355 SYMLINK libspdk_env_dpdk.so 00:04:34.355 CC lib/notify/notify.o 00:04:34.355 CC lib/notify/notify_rpc.o 00:04:34.355 CC lib/keyring/keyring_rpc.o 00:04:34.355 CC lib/trace/trace.o 00:04:34.355 CC lib/keyring/keyring.o 00:04:34.355 CC lib/trace/trace_rpc.o 00:04:34.355 CC lib/trace/trace_flags.o 00:04:34.355 LIB libspdk_notify.a 00:04:34.355 SO libspdk_notify.so.6.0 00:04:34.355 LIB libspdk_keyring.a 00:04:34.355 SYMLINK libspdk_notify.so 00:04:34.355 LIB libspdk_trace.a 00:04:34.355 SO libspdk_keyring.so.2.0 00:04:34.355 SO libspdk_trace.so.11.0 00:04:34.355 SYMLINK libspdk_keyring.so 00:04:34.355 SYMLINK libspdk_trace.so 00:04:34.355 CC lib/sock/sock_rpc.o 00:04:34.355 CC lib/sock/sock.o 00:04:34.355 CC lib/thread/thread.o 00:04:34.355 CC lib/thread/iobuf.o 00:04:34.355 LIB libspdk_sock.a 00:04:34.355 SO libspdk_sock.so.10.0 00:04:34.355 SYMLINK libspdk_sock.so 00:04:34.355 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:34.355 CC lib/nvme/nvme_ctrlr.o 00:04:34.355 CC lib/nvme/nvme_fabric.o 00:04:34.355 CC lib/nvme/nvme_ns_cmd.o 00:04:34.355 CC lib/nvme/nvme_ns.o 00:04:34.355 CC lib/nvme/nvme_pcie_common.o 00:04:34.355 CC lib/nvme/nvme_pcie.o 00:04:34.355 CC lib/nvme/nvme.o 00:04:34.355 CC lib/nvme/nvme_qpair.o 00:04:34.355 CC lib/nvme/nvme_quirks.o 00:04:34.355 CC lib/nvme/nvme_transport.o 00:04:34.355 CC lib/nvme/nvme_discovery.o 00:04:34.355 LIB libspdk_thread.a 00:04:34.355 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:34.355 SO libspdk_thread.so.10.1 00:04:34.355 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:34.355 SYMLINK libspdk_thread.so 00:04:34.355 CC lib/nvme/nvme_tcp.o 00:04:34.355 CC lib/nvme/nvme_opal.o 00:04:34.355 CC lib/nvme/nvme_io_msg.o 00:04:34.355 CC lib/nvme/nvme_poll_group.o 00:04:34.355 CC lib/nvme/nvme_zns.o 00:04:34.614 CC 
lib/nvme/nvme_stubs.o 00:04:34.614 CC lib/nvme/nvme_auth.o 00:04:34.614 CC lib/nvme/nvme_cuse.o 00:04:34.873 CC lib/blob/blobstore.o 00:04:34.873 CC lib/accel/accel.o 00:04:34.873 CC lib/init/json_config.o 00:04:34.873 CC lib/nvme/nvme_rdma.o 00:04:35.132 CC lib/blob/request.o 00:04:35.132 CC lib/blob/zeroes.o 00:04:35.132 CC lib/init/subsystem.o 00:04:35.132 CC lib/blob/blob_bs_dev.o 00:04:35.390 CC lib/init/subsystem_rpc.o 00:04:35.390 CC lib/init/rpc.o 00:04:35.390 CC lib/accel/accel_rpc.o 00:04:35.390 CC lib/accel/accel_sw.o 00:04:35.390 LIB libspdk_init.a 00:04:35.649 SO libspdk_init.so.6.0 00:04:35.649 CC lib/virtio/virtio.o 00:04:35.649 CC lib/virtio/virtio_vhost_user.o 00:04:35.649 CC lib/fsdev/fsdev.o 00:04:35.649 CC lib/virtio/virtio_vfio_user.o 00:04:35.649 SYMLINK libspdk_init.so 00:04:35.649 CC lib/fsdev/fsdev_io.o 00:04:35.908 CC lib/fsdev/fsdev_rpc.o 00:04:35.908 CC lib/virtio/virtio_pci.o 00:04:36.168 LIB libspdk_accel.a 00:04:36.168 CC lib/event/reactor.o 00:04:36.168 SO libspdk_accel.so.16.0 00:04:36.168 CC lib/event/app_rpc.o 00:04:36.168 CC lib/event/app.o 00:04:36.168 CC lib/event/log_rpc.o 00:04:36.168 CC lib/event/scheduler_static.o 00:04:36.168 LIB libspdk_virtio.a 00:04:36.168 SYMLINK libspdk_accel.so 00:04:36.168 SO libspdk_virtio.so.7.0 00:04:36.168 LIB libspdk_fsdev.a 00:04:36.168 SYMLINK libspdk_virtio.so 00:04:36.427 SO libspdk_fsdev.so.1.0 00:04:36.427 LIB libspdk_nvme.a 00:04:36.427 SYMLINK libspdk_fsdev.so 00:04:36.427 CC lib/bdev/bdev.o 00:04:36.427 CC lib/bdev/bdev_zone.o 00:04:36.427 CC lib/bdev/bdev_rpc.o 00:04:36.427 CC lib/bdev/part.o 00:04:36.427 CC lib/bdev/scsi_nvme.o 00:04:36.427 SO libspdk_nvme.so.14.0 00:04:36.686 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:36.686 LIB libspdk_event.a 00:04:36.686 SO libspdk_event.so.14.0 00:04:36.686 SYMLINK libspdk_nvme.so 00:04:36.686 SYMLINK libspdk_event.so 00:04:37.253 LIB libspdk_fuse_dispatcher.a 00:04:37.253 SO libspdk_fuse_dispatcher.so.1.0 00:04:37.253 SYMLINK 
libspdk_fuse_dispatcher.so 00:04:38.191 LIB libspdk_blob.a 00:04:38.450 SO libspdk_blob.so.11.0 00:04:38.450 SYMLINK libspdk_blob.so 00:04:39.020 CC lib/lvol/lvol.o 00:04:39.020 CC lib/blobfs/tree.o 00:04:39.020 CC lib/blobfs/blobfs.o 00:04:39.020 LIB libspdk_bdev.a 00:04:39.020 SO libspdk_bdev.so.16.0 00:04:39.279 SYMLINK libspdk_bdev.so 00:04:39.538 CC lib/nvmf/ctrlr.o 00:04:39.538 CC lib/nvmf/subsystem.o 00:04:39.538 CC lib/nvmf/ctrlr_discovery.o 00:04:39.538 CC lib/nvmf/ctrlr_bdev.o 00:04:39.538 CC lib/ublk/ublk.o 00:04:39.538 CC lib/ftl/ftl_core.o 00:04:39.538 CC lib/nbd/nbd.o 00:04:39.538 CC lib/scsi/dev.o 00:04:39.797 CC lib/scsi/lun.o 00:04:39.797 LIB libspdk_blobfs.a 00:04:39.797 SO libspdk_blobfs.so.10.0 00:04:39.797 LIB libspdk_lvol.a 00:04:39.797 SO libspdk_lvol.so.10.0 00:04:39.797 CC lib/ftl/ftl_init.o 00:04:39.797 SYMLINK libspdk_blobfs.so 00:04:39.797 CC lib/ublk/ublk_rpc.o 00:04:39.797 SYMLINK libspdk_lvol.so 00:04:39.797 CC lib/nbd/nbd_rpc.o 00:04:39.797 CC lib/scsi/port.o 00:04:39.797 CC lib/nvmf/nvmf.o 00:04:40.057 CC lib/ftl/ftl_layout.o 00:04:40.057 CC lib/scsi/scsi.o 00:04:40.057 CC lib/ftl/ftl_debug.o 00:04:40.057 CC lib/ftl/ftl_io.o 00:04:40.057 LIB libspdk_nbd.a 00:04:40.057 SO libspdk_nbd.so.7.0 00:04:40.057 LIB libspdk_ublk.a 00:04:40.057 SO libspdk_ublk.so.3.0 00:04:40.057 SYMLINK libspdk_nbd.so 00:04:40.057 CC lib/nvmf/nvmf_rpc.o 00:04:40.057 CC lib/nvmf/transport.o 00:04:40.057 CC lib/scsi/scsi_bdev.o 00:04:40.316 SYMLINK libspdk_ublk.so 00:04:40.316 CC lib/scsi/scsi_pr.o 00:04:40.316 CC lib/scsi/scsi_rpc.o 00:04:40.316 CC lib/scsi/task.o 00:04:40.316 CC lib/ftl/ftl_sb.o 00:04:40.316 CC lib/ftl/ftl_l2p.o 00:04:40.316 CC lib/ftl/ftl_l2p_flat.o 00:04:40.575 CC lib/nvmf/tcp.o 00:04:40.575 CC lib/nvmf/stubs.o 00:04:40.575 CC lib/nvmf/mdns_server.o 00:04:40.575 CC lib/ftl/ftl_nv_cache.o 00:04:40.834 LIB libspdk_scsi.a 00:04:40.834 CC lib/nvmf/rdma.o 00:04:40.834 CC lib/ftl/ftl_band.o 00:04:40.834 SO libspdk_scsi.so.9.0 00:04:40.834 SYMLINK 
libspdk_scsi.so 00:04:40.834 CC lib/ftl/ftl_band_ops.o 00:04:40.834 CC lib/nvmf/auth.o 00:04:41.093 CC lib/ftl/ftl_writer.o 00:04:41.093 CC lib/iscsi/conn.o 00:04:41.093 CC lib/ftl/ftl_rq.o 00:04:41.093 CC lib/vhost/vhost.o 00:04:41.093 CC lib/iscsi/init_grp.o 00:04:41.353 CC lib/ftl/ftl_reloc.o 00:04:41.353 CC lib/vhost/vhost_rpc.o 00:04:41.353 CC lib/vhost/vhost_scsi.o 00:04:41.353 CC lib/iscsi/iscsi.o 00:04:41.611 CC lib/iscsi/param.o 00:04:41.611 CC lib/ftl/ftl_l2p_cache.o 00:04:41.612 CC lib/ftl/ftl_p2l.o 00:04:41.612 CC lib/vhost/vhost_blk.o 00:04:41.870 CC lib/vhost/rte_vhost_user.o 00:04:41.870 CC lib/ftl/ftl_p2l_log.o 00:04:41.870 CC lib/iscsi/portal_grp.o 00:04:41.870 CC lib/ftl/mngt/ftl_mngt.o 00:04:42.128 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:42.128 CC lib/iscsi/tgt_node.o 00:04:42.128 CC lib/iscsi/iscsi_subsystem.o 00:04:42.128 CC lib/iscsi/iscsi_rpc.o 00:04:42.128 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:42.387 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:42.387 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:42.387 CC lib/iscsi/task.o 00:04:42.387 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:42.387 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:42.387 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:42.646 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:42.646 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:42.646 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.646 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:42.646 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:42.646 CC lib/ftl/utils/ftl_conf.o 00:04:42.646 CC lib/ftl/utils/ftl_md.o 00:04:42.905 CC lib/ftl/utils/ftl_mempool.o 00:04:42.905 LIB libspdk_vhost.a 00:04:42.905 CC lib/ftl/utils/ftl_bitmap.o 00:04:42.905 LIB libspdk_iscsi.a 00:04:42.905 CC lib/ftl/utils/ftl_property.o 00:04:42.905 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.905 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.905 SO libspdk_vhost.so.8.0 00:04:42.905 SO libspdk_iscsi.so.8.0 00:04:42.905 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.905 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.905 
SYMLINK libspdk_vhost.so 00:04:42.905 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.905 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.905 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:42.905 SYMLINK libspdk_iscsi.so 00:04:42.905 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.905 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:43.163 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:43.163 LIB libspdk_nvmf.a 00:04:43.163 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:43.163 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:43.163 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:43.163 CC lib/ftl/base/ftl_base_dev.o 00:04:43.163 CC lib/ftl/base/ftl_base_bdev.o 00:04:43.163 CC lib/ftl/ftl_trace.o 00:04:43.163 SO libspdk_nvmf.so.19.0 00:04:43.422 LIB libspdk_ftl.a 00:04:43.422 SYMLINK libspdk_nvmf.so 00:04:43.681 SO libspdk_ftl.so.9.0 00:04:43.940 SYMLINK libspdk_ftl.so 00:04:44.197 CC module/env_dpdk/env_dpdk_rpc.o 00:04:44.454 CC module/keyring/linux/keyring.o 00:04:44.454 CC module/keyring/file/keyring.o 00:04:44.454 CC module/accel/error/accel_error.o 00:04:44.454 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:44.454 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.454 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:44.454 CC module/fsdev/aio/fsdev_aio.o 00:04:44.454 CC module/sock/posix/posix.o 00:04:44.454 CC module/blob/bdev/blob_bdev.o 00:04:44.454 LIB libspdk_env_dpdk_rpc.a 00:04:44.454 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.454 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.454 CC module/keyring/linux/keyring_rpc.o 00:04:44.454 CC module/keyring/file/keyring_rpc.o 00:04:44.454 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.454 LIB libspdk_scheduler_gscheduler.a 00:04:44.454 CC module/accel/error/accel_error_rpc.o 00:04:44.454 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:44.454 SO libspdk_scheduler_gscheduler.so.4.0 00:04:44.454 LIB libspdk_scheduler_dynamic.a 00:04:44.711 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.711 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.711 SYMLINK 
libspdk_scheduler_gscheduler.so 00:04:44.711 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:44.711 CC module/fsdev/aio/linux_aio_mgr.o 00:04:44.711 LIB libspdk_keyring_linux.a 00:04:44.711 LIB libspdk_keyring_file.a 00:04:44.711 CC module/accel/ioat/accel_ioat.o 00:04:44.711 LIB libspdk_blob_bdev.a 00:04:44.711 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.712 SO libspdk_keyring_linux.so.1.0 00:04:44.712 SO libspdk_keyring_file.so.2.0 00:04:44.712 SO libspdk_blob_bdev.so.11.0 00:04:44.712 LIB libspdk_accel_error.a 00:04:44.712 SYMLINK libspdk_keyring_linux.so 00:04:44.712 SO libspdk_accel_error.so.2.0 00:04:44.712 SYMLINK libspdk_blob_bdev.so 00:04:44.712 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.712 SYMLINK libspdk_keyring_file.so 00:04:44.712 SYMLINK libspdk_accel_error.so 00:04:44.970 LIB libspdk_accel_ioat.a 00:04:44.970 CC module/accel/dsa/accel_dsa.o 00:04:44.970 CC module/accel/iaa/accel_iaa.o 00:04:44.970 SO libspdk_accel_ioat.so.6.0 00:04:44.970 SYMLINK libspdk_accel_ioat.so 00:04:44.970 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.970 CC module/bdev/error/vbdev_error.o 00:04:44.970 CC module/bdev/delay/vbdev_delay.o 00:04:44.970 CC module/bdev/lvol/vbdev_lvol.o 00:04:44.970 CC module/bdev/gpt/gpt.o 00:04:44.970 CC module/blobfs/bdev/blobfs_bdev.o 00:04:44.970 LIB libspdk_fsdev_aio.a 00:04:44.970 SO libspdk_fsdev_aio.so.1.0 00:04:44.970 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:44.970 LIB libspdk_accel_iaa.a 00:04:44.970 SO libspdk_accel_iaa.so.3.0 00:04:45.228 SYMLINK libspdk_fsdev_aio.so 00:04:45.228 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:45.228 CC module/accel/dsa/accel_dsa_rpc.o 00:04:45.228 CC module/bdev/error/vbdev_error_rpc.o 00:04:45.228 LIB libspdk_sock_posix.a 00:04:45.228 SYMLINK libspdk_accel_iaa.so 00:04:45.228 CC module/bdev/gpt/vbdev_gpt.o 00:04:45.228 SO libspdk_sock_posix.so.6.0 00:04:45.228 LIB libspdk_blobfs_bdev.a 00:04:45.228 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:45.228 SO libspdk_blobfs_bdev.so.6.0 00:04:45.228 LIB 
libspdk_accel_dsa.a 00:04:45.228 SYMLINK libspdk_sock_posix.so 00:04:45.228 SO libspdk_accel_dsa.so.5.0 00:04:45.228 SYMLINK libspdk_blobfs_bdev.so 00:04:45.228 CC module/bdev/malloc/bdev_malloc.o 00:04:45.228 LIB libspdk_bdev_error.a 00:04:45.228 SYMLINK libspdk_accel_dsa.so 00:04:45.228 SO libspdk_bdev_error.so.6.0 00:04:45.486 LIB libspdk_bdev_delay.a 00:04:45.486 SYMLINK libspdk_bdev_error.so 00:04:45.486 SO libspdk_bdev_delay.so.6.0 00:04:45.486 CC module/bdev/null/bdev_null.o 00:04:45.486 CC module/bdev/nvme/bdev_nvme.o 00:04:45.486 LIB libspdk_bdev_gpt.a 00:04:45.486 LIB libspdk_bdev_lvol.a 00:04:45.486 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.486 SO libspdk_bdev_gpt.so.6.0 00:04:45.486 SYMLINK libspdk_bdev_delay.so 00:04:45.486 CC module/bdev/raid/bdev_raid.o 00:04:45.486 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.486 SO libspdk_bdev_lvol.so.6.0 00:04:45.486 CC module/bdev/split/vbdev_split.o 00:04:45.486 SYMLINK libspdk_bdev_gpt.so 00:04:45.486 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.486 SYMLINK libspdk_bdev_lvol.so 00:04:45.486 CC module/bdev/raid/bdev_raid_sb.o 00:04:45.486 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.743 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.743 CC module/bdev/null/bdev_null_rpc.o 00:04:45.743 CC module/bdev/raid/raid0.o 00:04:45.743 CC module/bdev/split/vbdev_split_rpc.o 00:04:45.743 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:45.743 LIB libspdk_bdev_malloc.a 00:04:45.743 LIB libspdk_bdev_null.a 00:04:45.743 SO libspdk_bdev_malloc.so.6.0 00:04:45.743 SO libspdk_bdev_null.so.6.0 00:04:46.001 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:46.001 SYMLINK libspdk_bdev_malloc.so 00:04:46.001 CC module/bdev/nvme/nvme_rpc.o 00:04:46.001 SYMLINK libspdk_bdev_null.so 00:04:46.001 LIB libspdk_bdev_passthru.a 00:04:46.001 LIB libspdk_bdev_split.a 00:04:46.001 SO libspdk_bdev_passthru.so.6.0 00:04:46.001 SO libspdk_bdev_split.so.6.0 00:04:46.001 CC module/bdev/raid/raid1.o 00:04:46.001 SYMLINK 
libspdk_bdev_passthru.so 00:04:46.001 SYMLINK libspdk_bdev_split.so 00:04:46.001 CC module/bdev/aio/bdev_aio.o 00:04:46.001 LIB libspdk_bdev_zone_block.a 00:04:46.001 SO libspdk_bdev_zone_block.so.6.0 00:04:46.001 CC module/bdev/ftl/bdev_ftl.o 00:04:46.001 CC module/bdev/nvme/bdev_mdns_client.o 00:04:46.258 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.258 SYMLINK libspdk_bdev_zone_block.so 00:04:46.258 CC module/bdev/raid/concat.o 00:04:46.258 CC module/bdev/raid/raid5f.o 00:04:46.258 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:46.258 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:46.258 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:46.258 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:46.258 CC module/bdev/aio/bdev_aio_rpc.o 00:04:46.515 CC module/bdev/nvme/vbdev_opal.o 00:04:46.515 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.515 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.515 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.515 LIB libspdk_bdev_aio.a 00:04:46.515 LIB libspdk_bdev_ftl.a 00:04:46.515 SO libspdk_bdev_aio.so.6.0 00:04:46.515 SO libspdk_bdev_ftl.so.6.0 00:04:46.515 SYMLINK libspdk_bdev_aio.so 00:04:46.515 SYMLINK libspdk_bdev_ftl.so 00:04:46.515 LIB libspdk_bdev_iscsi.a 00:04:46.773 LIB libspdk_bdev_raid.a 00:04:46.774 SO libspdk_bdev_iscsi.so.6.0 00:04:46.774 LIB libspdk_bdev_virtio.a 00:04:46.774 SO libspdk_bdev_raid.so.6.0 00:04:46.774 SYMLINK libspdk_bdev_iscsi.so 00:04:46.774 SO libspdk_bdev_virtio.so.6.0 00:04:46.774 SYMLINK libspdk_bdev_raid.so 00:04:46.774 SYMLINK libspdk_bdev_virtio.so 00:04:47.709 LIB libspdk_bdev_nvme.a 00:04:47.967 SO libspdk_bdev_nvme.so.7.0 00:04:47.967 SYMLINK libspdk_bdev_nvme.so 00:04:48.533 CC module/event/subsystems/fsdev/fsdev.o 00:04:48.533 CC module/event/subsystems/iobuf/iobuf.o 00:04:48.533 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:48.533 CC module/event/subsystems/vmd/vmd.o 00:04:48.533 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:48.533 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:48.533 
CC module/event/subsystems/scheduler/scheduler.o 00:04:48.533 CC module/event/subsystems/keyring/keyring.o 00:04:48.533 CC module/event/subsystems/sock/sock.o 00:04:48.792 LIB libspdk_event_vmd.a 00:04:48.792 LIB libspdk_event_fsdev.a 00:04:48.792 LIB libspdk_event_scheduler.a 00:04:48.792 LIB libspdk_event_vhost_blk.a 00:04:48.792 LIB libspdk_event_iobuf.a 00:04:48.792 LIB libspdk_event_keyring.a 00:04:48.792 LIB libspdk_event_sock.a 00:04:48.792 SO libspdk_event_fsdev.so.1.0 00:04:48.792 SO libspdk_event_vmd.so.6.0 00:04:48.792 SO libspdk_event_vhost_blk.so.3.0 00:04:48.792 SO libspdk_event_scheduler.so.4.0 00:04:48.792 SO libspdk_event_sock.so.5.0 00:04:48.792 SO libspdk_event_iobuf.so.3.0 00:04:48.792 SO libspdk_event_keyring.so.1.0 00:04:48.792 SYMLINK libspdk_event_fsdev.so 00:04:48.792 SYMLINK libspdk_event_vmd.so 00:04:48.792 SYMLINK libspdk_event_scheduler.so 00:04:48.792 SYMLINK libspdk_event_vhost_blk.so 00:04:48.792 SYMLINK libspdk_event_sock.so 00:04:48.792 SYMLINK libspdk_event_keyring.so 00:04:48.792 SYMLINK libspdk_event_iobuf.so 00:04:49.051 CC module/event/subsystems/accel/accel.o 00:04:49.311 LIB libspdk_event_accel.a 00:04:49.311 SO libspdk_event_accel.so.6.0 00:04:49.311 SYMLINK libspdk_event_accel.so 00:04:49.881 CC module/event/subsystems/bdev/bdev.o 00:04:49.881 LIB libspdk_event_bdev.a 00:04:49.881 SO libspdk_event_bdev.so.6.0 00:04:50.144 SYMLINK libspdk_event_bdev.so 00:04:50.402 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:50.402 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:50.402 CC module/event/subsystems/ublk/ublk.o 00:04:50.402 CC module/event/subsystems/scsi/scsi.o 00:04:50.402 CC module/event/subsystems/nbd/nbd.o 00:04:50.661 LIB libspdk_event_nbd.a 00:04:50.661 LIB libspdk_event_nvmf.a 00:04:50.661 LIB libspdk_event_ublk.a 00:04:50.661 LIB libspdk_event_scsi.a 00:04:50.661 SO libspdk_event_nbd.so.6.0 00:04:50.661 SO libspdk_event_ublk.so.3.0 00:04:50.661 SO libspdk_event_nvmf.so.6.0 00:04:50.661 SO 
libspdk_event_scsi.so.6.0 00:04:50.661 SYMLINK libspdk_event_ublk.so 00:04:50.661 SYMLINK libspdk_event_nbd.so 00:04:50.661 SYMLINK libspdk_event_nvmf.so 00:04:50.661 SYMLINK libspdk_event_scsi.so 00:04:50.929 CC module/event/subsystems/iscsi/iscsi.o 00:04:50.929 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:51.216 LIB libspdk_event_iscsi.a 00:04:51.216 LIB libspdk_event_vhost_scsi.a 00:04:51.216 SO libspdk_event_iscsi.so.6.0 00:04:51.216 SO libspdk_event_vhost_scsi.so.3.0 00:04:51.216 SYMLINK libspdk_event_iscsi.so 00:04:51.216 SYMLINK libspdk_event_vhost_scsi.so 00:04:51.485 SO libspdk.so.6.0 00:04:51.485 SYMLINK libspdk.so 00:04:51.744 CC test/rpc_client/rpc_client_test.o 00:04:51.744 TEST_HEADER include/spdk/accel.h 00:04:51.744 TEST_HEADER include/spdk/accel_module.h 00:04:51.744 TEST_HEADER include/spdk/assert.h 00:04:51.744 TEST_HEADER include/spdk/barrier.h 00:04:51.744 TEST_HEADER include/spdk/base64.h 00:04:51.744 CXX app/trace/trace.o 00:04:51.744 TEST_HEADER include/spdk/bdev.h 00:04:51.744 TEST_HEADER include/spdk/bdev_module.h 00:04:51.744 TEST_HEADER include/spdk/bdev_zone.h 00:04:51.744 TEST_HEADER include/spdk/bit_array.h 00:04:51.744 TEST_HEADER include/spdk/bit_pool.h 00:04:51.744 TEST_HEADER include/spdk/blob_bdev.h 00:04:51.744 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:51.744 TEST_HEADER include/spdk/blobfs.h 00:04:51.744 TEST_HEADER include/spdk/blob.h 00:04:51.744 TEST_HEADER include/spdk/conf.h 00:04:51.744 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:51.744 TEST_HEADER include/spdk/config.h 00:04:51.744 TEST_HEADER include/spdk/cpuset.h 00:04:51.744 TEST_HEADER include/spdk/crc16.h 00:04:51.744 TEST_HEADER include/spdk/crc32.h 00:04:51.744 TEST_HEADER include/spdk/crc64.h 00:04:51.744 TEST_HEADER include/spdk/dif.h 00:04:51.744 TEST_HEADER include/spdk/dma.h 00:04:51.744 TEST_HEADER include/spdk/endian.h 00:04:51.744 TEST_HEADER include/spdk/env_dpdk.h 00:04:51.744 TEST_HEADER include/spdk/env.h 00:04:51.744 TEST_HEADER 
include/spdk/event.h 00:04:51.744 TEST_HEADER include/spdk/fd_group.h 00:04:51.744 TEST_HEADER include/spdk/fd.h 00:04:51.744 TEST_HEADER include/spdk/file.h 00:04:51.744 TEST_HEADER include/spdk/fsdev.h 00:04:51.744 TEST_HEADER include/spdk/fsdev_module.h 00:04:51.744 TEST_HEADER include/spdk/ftl.h 00:04:51.745 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:51.745 TEST_HEADER include/spdk/gpt_spec.h 00:04:51.745 CC examples/util/zipf/zipf.o 00:04:51.745 CC examples/ioat/perf/perf.o 00:04:51.745 TEST_HEADER include/spdk/hexlify.h 00:04:51.745 TEST_HEADER include/spdk/histogram_data.h 00:04:51.745 TEST_HEADER include/spdk/idxd.h 00:04:51.745 TEST_HEADER include/spdk/idxd_spec.h 00:04:51.745 TEST_HEADER include/spdk/init.h 00:04:51.745 TEST_HEADER include/spdk/ioat.h 00:04:51.745 CC test/thread/poller_perf/poller_perf.o 00:04:51.745 TEST_HEADER include/spdk/ioat_spec.h 00:04:52.003 TEST_HEADER include/spdk/iscsi_spec.h 00:04:52.003 TEST_HEADER include/spdk/json.h 00:04:52.003 TEST_HEADER include/spdk/jsonrpc.h 00:04:52.003 TEST_HEADER include/spdk/keyring.h 00:04:52.003 TEST_HEADER include/spdk/keyring_module.h 00:04:52.003 TEST_HEADER include/spdk/likely.h 00:04:52.003 TEST_HEADER include/spdk/log.h 00:04:52.003 TEST_HEADER include/spdk/lvol.h 00:04:52.003 TEST_HEADER include/spdk/md5.h 00:04:52.003 TEST_HEADER include/spdk/memory.h 00:04:52.003 CC test/dma/test_dma/test_dma.o 00:04:52.003 TEST_HEADER include/spdk/mmio.h 00:04:52.003 TEST_HEADER include/spdk/nbd.h 00:04:52.003 TEST_HEADER include/spdk/net.h 00:04:52.003 TEST_HEADER include/spdk/notify.h 00:04:52.003 TEST_HEADER include/spdk/nvme.h 00:04:52.003 TEST_HEADER include/spdk/nvme_intel.h 00:04:52.003 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:52.003 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:52.003 TEST_HEADER include/spdk/nvme_spec.h 00:04:52.003 TEST_HEADER include/spdk/nvme_zns.h 00:04:52.003 CC test/app/bdev_svc/bdev_svc.o 00:04:52.003 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:52.003 
TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:52.003 TEST_HEADER include/spdk/nvmf.h 00:04:52.003 TEST_HEADER include/spdk/nvmf_spec.h 00:04:52.003 TEST_HEADER include/spdk/nvmf_transport.h 00:04:52.004 TEST_HEADER include/spdk/opal.h 00:04:52.004 TEST_HEADER include/spdk/opal_spec.h 00:04:52.004 TEST_HEADER include/spdk/pci_ids.h 00:04:52.004 TEST_HEADER include/spdk/pipe.h 00:04:52.004 TEST_HEADER include/spdk/queue.h 00:04:52.004 CC test/env/mem_callbacks/mem_callbacks.o 00:04:52.004 TEST_HEADER include/spdk/reduce.h 00:04:52.004 TEST_HEADER include/spdk/rpc.h 00:04:52.004 TEST_HEADER include/spdk/scheduler.h 00:04:52.004 TEST_HEADER include/spdk/scsi.h 00:04:52.004 TEST_HEADER include/spdk/scsi_spec.h 00:04:52.004 TEST_HEADER include/spdk/sock.h 00:04:52.004 TEST_HEADER include/spdk/stdinc.h 00:04:52.004 TEST_HEADER include/spdk/string.h 00:04:52.004 TEST_HEADER include/spdk/thread.h 00:04:52.004 TEST_HEADER include/spdk/trace.h 00:04:52.004 TEST_HEADER include/spdk/trace_parser.h 00:04:52.004 TEST_HEADER include/spdk/tree.h 00:04:52.004 TEST_HEADER include/spdk/ublk.h 00:04:52.004 TEST_HEADER include/spdk/util.h 00:04:52.004 TEST_HEADER include/spdk/uuid.h 00:04:52.004 TEST_HEADER include/spdk/version.h 00:04:52.004 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:52.004 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:52.004 TEST_HEADER include/spdk/vhost.h 00:04:52.004 TEST_HEADER include/spdk/vmd.h 00:04:52.004 TEST_HEADER include/spdk/xor.h 00:04:52.004 TEST_HEADER include/spdk/zipf.h 00:04:52.004 CXX test/cpp_headers/accel.o 00:04:52.004 LINK rpc_client_test 00:04:52.004 LINK interrupt_tgt 00:04:52.004 LINK zipf 00:04:52.004 LINK poller_perf 00:04:52.004 LINK ioat_perf 00:04:52.004 LINK bdev_svc 00:04:52.262 CXX test/cpp_headers/accel_module.o 00:04:52.262 LINK spdk_trace 00:04:52.262 CXX test/cpp_headers/assert.o 00:04:52.262 CC examples/ioat/verify/verify.o 00:04:52.262 CC test/env/vtophys/vtophys.o 00:04:52.262 CC app/trace_record/trace_record.o 
00:04:52.262 CXX test/cpp_headers/barrier.o 00:04:52.262 LINK vtophys 00:04:52.262 LINK mem_callbacks 00:04:52.262 CXX test/cpp_headers/base64.o 00:04:52.262 LINK test_dma 00:04:52.520 CC examples/sock/hello_world/hello_sock.o 00:04:52.520 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:52.520 LINK verify 00:04:52.520 CC examples/vmd/lsvmd/lsvmd.o 00:04:52.520 CC examples/thread/thread/thread_ex.o 00:04:52.520 LINK spdk_trace_record 00:04:52.520 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:52.520 CC examples/vmd/led/led.o 00:04:52.520 CXX test/cpp_headers/bdev.o 00:04:52.520 LINK lsvmd 00:04:52.777 LINK hello_sock 00:04:52.777 LINK led 00:04:52.777 LINK env_dpdk_post_init 00:04:52.777 CXX test/cpp_headers/bdev_module.o 00:04:52.778 CC app/nvmf_tgt/nvmf_main.o 00:04:52.778 CC test/env/memory/memory_ut.o 00:04:52.778 LINK thread 00:04:52.778 CC app/iscsi_tgt/iscsi_tgt.o 00:04:52.778 CC test/env/pci/pci_ut.o 00:04:52.778 CXX test/cpp_headers/bdev_zone.o 00:04:52.778 LINK nvmf_tgt 00:04:53.036 LINK iscsi_tgt 00:04:53.036 LINK nvme_fuzz 00:04:53.036 CC test/app/jsoncat/jsoncat.o 00:04:53.036 CC test/app/histogram_perf/histogram_perf.o 00:04:53.036 CC examples/idxd/perf/perf.o 00:04:53.036 CC examples/nvme/hello_world/hello_world.o 00:04:53.036 CXX test/cpp_headers/bit_array.o 00:04:53.036 CXX test/cpp_headers/bit_pool.o 00:04:53.036 LINK histogram_perf 00:04:53.036 LINK jsoncat 00:04:53.036 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:53.293 CXX test/cpp_headers/blob_bdev.o 00:04:53.293 LINK hello_world 00:04:53.293 LINK pci_ut 00:04:53.293 CXX test/cpp_headers/blobfs_bdev.o 00:04:53.293 CC app/spdk_tgt/spdk_tgt.o 00:04:53.293 CC test/app/stub/stub.o 00:04:53.293 LINK idxd_perf 00:04:53.293 CC examples/accel/perf/accel_perf.o 00:04:53.293 CXX test/cpp_headers/blobfs.o 00:04:53.551 CC examples/nvme/reconnect/reconnect.o 00:04:53.551 LINK spdk_tgt 00:04:53.551 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:53.551 LINK stub 00:04:53.551 CC 
examples/nvme/arbitration/arbitration.o 00:04:53.551 CC examples/nvme/hotplug/hotplug.o 00:04:53.551 CXX test/cpp_headers/blob.o 00:04:53.551 CXX test/cpp_headers/conf.o 00:04:53.809 CC app/spdk_lspci/spdk_lspci.o 00:04:53.809 CXX test/cpp_headers/config.o 00:04:53.809 LINK hotplug 00:04:53.809 LINK reconnect 00:04:53.809 CXX test/cpp_headers/cpuset.o 00:04:53.809 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:53.809 LINK accel_perf 00:04:53.809 LINK memory_ut 00:04:53.809 LINK arbitration 00:04:53.809 LINK spdk_lspci 00:04:53.809 CXX test/cpp_headers/crc16.o 00:04:54.068 LINK nvme_manage 00:04:54.068 CXX test/cpp_headers/crc32.o 00:04:54.068 LINK cmb_copy 00:04:54.068 CXX test/cpp_headers/crc64.o 00:04:54.068 CXX test/cpp_headers/dif.o 00:04:54.068 CC app/spdk_nvme_perf/perf.o 00:04:54.068 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:54.068 CC examples/blob/hello_world/hello_blob.o 00:04:54.068 CXX test/cpp_headers/dma.o 00:04:54.325 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:54.325 CC examples/nvme/abort/abort.o 00:04:54.325 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:54.325 CC examples/bdev/hello_world/hello_bdev.o 00:04:54.325 CXX test/cpp_headers/endian.o 00:04:54.325 CC test/event/event_perf/event_perf.o 00:04:54.325 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:54.325 LINK hello_blob 00:04:54.325 LINK pmr_persistence 00:04:54.325 LINK hello_fsdev 00:04:54.325 CXX test/cpp_headers/env_dpdk.o 00:04:54.582 LINK event_perf 00:04:54.582 LINK hello_bdev 00:04:54.582 LINK abort 00:04:54.582 CXX test/cpp_headers/env.o 00:04:54.582 CXX test/cpp_headers/event.o 00:04:54.582 CC app/spdk_nvme_identify/identify.o 00:04:54.582 CC examples/blob/cli/blobcli.o 00:04:54.582 CC test/event/reactor/reactor.o 00:04:54.840 LINK vhost_fuzz 00:04:54.840 CXX test/cpp_headers/fd_group.o 00:04:54.840 LINK reactor 00:04:54.840 CC examples/bdev/bdevperf/bdevperf.o 00:04:54.840 CC test/event/reactor_perf/reactor_perf.o 00:04:54.840 CC test/nvme/aer/aer.o 
00:04:54.840 LINK iscsi_fuzz 00:04:54.840 CXX test/cpp_headers/fd.o 00:04:54.840 LINK spdk_nvme_perf 00:04:54.840 CC test/nvme/reset/reset.o 00:04:54.840 LINK reactor_perf 00:04:55.097 CC test/event/app_repeat/app_repeat.o 00:04:55.097 CXX test/cpp_headers/file.o 00:04:55.097 LINK aer 00:04:55.097 LINK blobcli 00:04:55.097 LINK app_repeat 00:04:55.097 CXX test/cpp_headers/fsdev.o 00:04:55.097 LINK reset 00:04:55.354 CC test/accel/dif/dif.o 00:04:55.354 CC test/blobfs/mkfs/mkfs.o 00:04:55.354 CXX test/cpp_headers/fsdev_module.o 00:04:55.354 CC test/lvol/esnap/esnap.o 00:04:55.354 CC app/spdk_nvme_discover/discovery_aer.o 00:04:55.354 CC app/spdk_top/spdk_top.o 00:04:55.354 CC test/event/scheduler/scheduler.o 00:04:55.354 LINK spdk_nvme_identify 00:04:55.354 CXX test/cpp_headers/ftl.o 00:04:55.354 LINK mkfs 00:04:55.354 CC test/nvme/sgl/sgl.o 00:04:55.612 LINK spdk_nvme_discover 00:04:55.612 LINK bdevperf 00:04:55.612 CXX test/cpp_headers/fuse_dispatcher.o 00:04:55.612 LINK scheduler 00:04:55.612 CC app/vhost/vhost.o 00:04:55.612 CC app/spdk_dd/spdk_dd.o 00:04:55.612 LINK sgl 00:04:55.612 CXX test/cpp_headers/gpt_spec.o 00:04:55.870 LINK vhost 00:04:55.870 CXX test/cpp_headers/hexlify.o 00:04:55.870 CC app/fio/nvme/fio_plugin.o 00:04:55.870 LINK dif 00:04:55.870 CC examples/nvmf/nvmf/nvmf.o 00:04:55.870 CC app/fio/bdev/fio_plugin.o 00:04:55.870 CC test/nvme/e2edp/nvme_dp.o 00:04:56.128 CXX test/cpp_headers/histogram_data.o 00:04:56.128 LINK spdk_dd 00:04:56.128 CC test/nvme/overhead/overhead.o 00:04:56.128 CXX test/cpp_headers/idxd.o 00:04:56.128 CC test/nvme/err_injection/err_injection.o 00:04:56.128 LINK nvme_dp 00:04:56.128 LINK nvmf 00:04:56.128 CXX test/cpp_headers/idxd_spec.o 00:04:56.386 LINK spdk_top 00:04:56.386 LINK err_injection 00:04:56.386 LINK overhead 00:04:56.386 CXX test/cpp_headers/init.o 00:04:56.386 CXX test/cpp_headers/ioat.o 00:04:56.386 LINK spdk_bdev 00:04:56.386 LINK spdk_nvme 00:04:56.386 CXX test/cpp_headers/ioat_spec.o 00:04:56.386 CC 
test/nvme/startup/startup.o 00:04:56.386 CC test/bdev/bdevio/bdevio.o 00:04:56.386 CXX test/cpp_headers/iscsi_spec.o 00:04:56.386 CXX test/cpp_headers/json.o 00:04:56.644 CC test/nvme/reserve/reserve.o 00:04:56.644 CXX test/cpp_headers/jsonrpc.o 00:04:56.644 CC test/nvme/simple_copy/simple_copy.o 00:04:56.644 CC test/nvme/connect_stress/connect_stress.o 00:04:56.644 CC test/nvme/boot_partition/boot_partition.o 00:04:56.644 LINK startup 00:04:56.644 CXX test/cpp_headers/keyring.o 00:04:56.644 CC test/nvme/compliance/nvme_compliance.o 00:04:56.644 CXX test/cpp_headers/keyring_module.o 00:04:56.644 LINK boot_partition 00:04:56.644 LINK connect_stress 00:04:56.901 LINK reserve 00:04:56.901 CC test/nvme/fused_ordering/fused_ordering.o 00:04:56.901 LINK simple_copy 00:04:56.901 LINK bdevio 00:04:56.901 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.901 CXX test/cpp_headers/likely.o 00:04:56.901 CXX test/cpp_headers/log.o 00:04:56.901 CC test/nvme/fdp/fdp.o 00:04:56.901 CXX test/cpp_headers/lvol.o 00:04:56.901 CC test/nvme/cuse/cuse.o 00:04:56.901 LINK fused_ordering 00:04:56.901 CXX test/cpp_headers/md5.o 00:04:57.159 CXX test/cpp_headers/memory.o 00:04:57.159 LINK doorbell_aers 00:04:57.159 LINK nvme_compliance 00:04:57.159 CXX test/cpp_headers/mmio.o 00:04:57.159 CXX test/cpp_headers/nbd.o 00:04:57.159 CXX test/cpp_headers/net.o 00:04:57.159 CXX test/cpp_headers/notify.o 00:04:57.159 CXX test/cpp_headers/nvme.o 00:04:57.159 CXX test/cpp_headers/nvme_intel.o 00:04:57.159 CXX test/cpp_headers/nvme_ocssd.o 00:04:57.159 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:57.159 CXX test/cpp_headers/nvme_spec.o 00:04:57.159 CXX test/cpp_headers/nvme_zns.o 00:04:57.418 CXX test/cpp_headers/nvmf_cmd.o 00:04:57.418 LINK fdp 00:04:57.418 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:57.418 CXX test/cpp_headers/nvmf.o 00:04:57.418 CXX test/cpp_headers/nvmf_spec.o 00:04:57.418 CXX test/cpp_headers/nvmf_transport.o 00:04:57.418 CXX test/cpp_headers/opal.o 00:04:57.418 CXX 
test/cpp_headers/opal_spec.o 00:04:57.418 CXX test/cpp_headers/pci_ids.o 00:04:57.418 CXX test/cpp_headers/pipe.o 00:04:57.418 CXX test/cpp_headers/queue.o 00:04:57.418 CXX test/cpp_headers/reduce.o 00:04:57.418 CXX test/cpp_headers/rpc.o 00:04:57.418 CXX test/cpp_headers/scheduler.o 00:04:57.418 CXX test/cpp_headers/scsi.o 00:04:57.418 CXX test/cpp_headers/scsi_spec.o 00:04:57.677 CXX test/cpp_headers/sock.o 00:04:57.677 CXX test/cpp_headers/stdinc.o 00:04:57.677 CXX test/cpp_headers/string.o 00:04:57.677 CXX test/cpp_headers/thread.o 00:04:57.677 CXX test/cpp_headers/trace.o 00:04:57.677 CXX test/cpp_headers/trace_parser.o 00:04:57.677 CXX test/cpp_headers/tree.o 00:04:57.677 CXX test/cpp_headers/ublk.o 00:04:57.677 CXX test/cpp_headers/util.o 00:04:57.677 CXX test/cpp_headers/uuid.o 00:04:57.677 CXX test/cpp_headers/version.o 00:04:57.677 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.677 CXX test/cpp_headers/vfio_user_spec.o 00:04:57.677 CXX test/cpp_headers/vhost.o 00:04:57.677 CXX test/cpp_headers/vmd.o 00:04:57.937 CXX test/cpp_headers/xor.o 00:04:57.937 CXX test/cpp_headers/zipf.o 00:04:58.197 LINK cuse 00:05:00.737 LINK esnap 00:05:01.304 00:05:01.304 real 1m15.025s 00:05:01.304 user 5m35.171s 00:05:01.304 sys 1m10.661s 00:05:01.304 02:38:12 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:01.304 02:38:12 make -- common/autotest_common.sh@10 -- $ set +x 00:05:01.304 ************************************ 00:05:01.304 END TEST make 00:05:01.304 ************************************ 00:05:01.304 02:38:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:01.304 02:38:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:01.304 02:38:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:01.304 02:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.304 02:38:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:01.304 02:38:12 -- pm/common@44 -- $ 
pid=6206 00:05:01.304 02:38:12 -- pm/common@50 -- $ kill -TERM 6206 00:05:01.304 02:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.304 02:38:12 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:01.304 02:38:12 -- pm/common@44 -- $ pid=6208 00:05:01.304 02:38:12 -- pm/common@50 -- $ kill -TERM 6208 00:05:01.304 02:38:12 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:01.304 02:38:12 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:01.304 02:38:12 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:01.304 02:38:12 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:01.304 02:38:12 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.304 02:38:12 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.304 02:38:12 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.304 02:38:12 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.304 02:38:12 -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.304 02:38:12 -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.304 02:38:12 -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.304 02:38:12 -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.304 02:38:12 -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.304 02:38:12 -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.304 02:38:12 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.304 02:38:12 -- scripts/common.sh@344 -- # case "$op" in 00:05:01.304 02:38:12 -- scripts/common.sh@345 -- # : 1 00:05:01.304 02:38:12 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.304 02:38:12 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.304 02:38:12 -- scripts/common.sh@365 -- # decimal 1 00:05:01.304 02:38:12 -- scripts/common.sh@353 -- # local d=1 00:05:01.304 02:38:12 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.304 02:38:12 -- scripts/common.sh@355 -- # echo 1 00:05:01.304 02:38:12 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.304 02:38:12 -- scripts/common.sh@366 -- # decimal 2 00:05:01.304 02:38:12 -- scripts/common.sh@353 -- # local d=2 00:05:01.304 02:38:12 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.304 02:38:12 -- scripts/common.sh@355 -- # echo 2 00:05:01.304 02:38:12 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.304 02:38:12 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.304 02:38:12 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.304 02:38:12 -- scripts/common.sh@368 -- # return 0 00:05:01.304 02:38:12 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.304 02:38:12 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.304 --rc genhtml_branch_coverage=1 00:05:01.304 --rc genhtml_function_coverage=1 00:05:01.304 --rc genhtml_legend=1 00:05:01.304 --rc geninfo_all_blocks=1 00:05:01.304 --rc geninfo_unexecuted_blocks=1 00:05:01.304 00:05:01.304 ' 00:05:01.304 02:38:12 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.304 --rc genhtml_branch_coverage=1 00:05:01.304 --rc genhtml_function_coverage=1 00:05:01.304 --rc genhtml_legend=1 00:05:01.304 --rc geninfo_all_blocks=1 00:05:01.304 --rc geninfo_unexecuted_blocks=1 00:05:01.304 00:05:01.304 ' 00:05:01.304 02:38:12 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.304 --rc genhtml_branch_coverage=1 00:05:01.304 --rc 
genhtml_function_coverage=1 00:05:01.304 --rc genhtml_legend=1 00:05:01.304 --rc geninfo_all_blocks=1 00:05:01.304 --rc geninfo_unexecuted_blocks=1 00:05:01.304 00:05:01.304 ' 00:05:01.304 02:38:12 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:01.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.304 --rc genhtml_branch_coverage=1 00:05:01.304 --rc genhtml_function_coverage=1 00:05:01.304 --rc genhtml_legend=1 00:05:01.304 --rc geninfo_all_blocks=1 00:05:01.304 --rc geninfo_unexecuted_blocks=1 00:05:01.304 00:05:01.304 ' 00:05:01.304 02:38:12 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.563 02:38:12 -- nvmf/common.sh@7 -- # uname -s 00:05:01.563 02:38:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.563 02:38:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.563 02:38:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.563 02:38:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.563 02:38:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.563 02:38:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.563 02:38:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.563 02:38:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.563 02:38:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.563 02:38:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.563 02:38:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ee65646-a660-4775-adfc-b31218a3d881 00:05:01.563 02:38:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=1ee65646-a660-4775-adfc-b31218a3d881 00:05:01.563 02:38:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.563 02:38:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.563 02:38:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:01.563 02:38:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:05:01.563 02:38:12 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.563 02:38:12 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:01.563 02:38:12 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.563 02:38:12 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.563 02:38:12 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.563 02:38:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.563 02:38:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.563 02:38:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.563 02:38:12 -- paths/export.sh@5 -- # export PATH 00:05:01.563 02:38:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.563 02:38:12 -- nvmf/common.sh@51 -- # : 0 00:05:01.563 02:38:12 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:01.563 02:38:12 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:01.563 02:38:12 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:01.563 02:38:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.563 02:38:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.563 02:38:12 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:01.563 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:01.563 02:38:12 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:01.563 02:38:12 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:01.563 02:38:12 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:01.563 02:38:12 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:01.563 02:38:12 -- spdk/autotest.sh@32 -- # uname -s 00:05:01.563 02:38:12 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:01.563 02:38:12 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:01.563 02:38:12 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.563 02:38:12 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:01.563 02:38:12 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.563 02:38:12 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:01.563 02:38:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:01.563 02:38:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:01.563 02:38:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:01.564 02:38:12 -- spdk/autotest.sh@48 -- # udevadm_pid=66833 00:05:01.564 02:38:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:01.564 02:38:12 -- pm/common@17 -- # local monitor 00:05:01.564 02:38:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.564 02:38:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:01.564 02:38:12 -- pm/common@21 -- # date +%s 00:05:01.564 02:38:12 -- pm/common@25 -- # sleep 1 00:05:01.564 02:38:12 -- 
pm/common@21 -- # date +%s 00:05:01.564 02:38:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733539092 00:05:01.564 02:38:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733539092 00:05:01.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733539092_collect-vmstat.pm.log 00:05:01.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733539092_collect-cpu-load.pm.log 00:05:02.499 02:38:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:02.499 02:38:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:02.499 02:38:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.499 02:38:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.499 02:38:13 -- spdk/autotest.sh@59 -- # create_test_list 00:05:02.499 02:38:13 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:02.499 02:38:13 -- common/autotest_common.sh@10 -- # set +x 00:05:02.499 02:38:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.499 02:38:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.758 02:38:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.758 02:38:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.758 02:38:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.758 02:38:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.758 02:38:13 -- common/autotest_common.sh@1455 -- # uname 00:05:02.758 02:38:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:02.758 02:38:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.758 02:38:13 -- common/autotest_common.sh@1475 -- 
# uname 00:05:02.758 02:38:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:02.758 02:38:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:02.758 02:38:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.758 lcov: LCOV version 1.15 00:05:02.758 02:38:13 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:17.645 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:17.645 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:32.559 02:38:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:32.559 02:38:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.559 02:38:42 -- common/autotest_common.sh@10 -- # set +x 00:05:32.559 02:38:42 -- spdk/autotest.sh@78 -- # rm -f 00:05:32.559 02:38:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.559 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:32.559 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:32.559 02:38:43 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:32.559 02:38:43 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:32.559 02:38:43 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:32.559 02:38:43 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:32.559 
02:38:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.559 02:38:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:32.559 02:38:43 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:32.559 02:38:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.559 02:38:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.559 02:38:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.559 02:38:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:32.559 02:38:43 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:32.559 02:38:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.559 02:38:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.559 02:38:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.559 02:38:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:32.559 02:38:43 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:32.559 02:38:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:32.559 02:38:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.559 02:38:43 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:32.559 02:38:43 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:32.559 02:38:43 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:32.559 02:38:43 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:32.559 02:38:43 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:32.559 02:38:43 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:32.559 02:38:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.559 02:38:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.559 02:38:43 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:32.559 02:38:43 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:32.559 02:38:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:32.559 No valid GPT data, bailing 00:05:32.559 02:38:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.559 02:38:43 -- scripts/common.sh@394 -- # pt= 00:05:32.559 02:38:43 -- scripts/common.sh@395 -- # return 1 00:05:32.559 02:38:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:32.559 1+0 records in 00:05:32.559 1+0 records out 00:05:32.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653274 s, 161 MB/s 00:05:32.559 02:38:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.559 02:38:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.559 02:38:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:32.559 02:38:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:32.559 02:38:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:32.819 No valid GPT data, bailing 00:05:32.819 02:38:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:32.819 02:38:43 -- scripts/common.sh@394 -- # pt= 00:05:32.819 02:38:43 -- scripts/common.sh@395 -- # return 1 00:05:32.819 02:38:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:32.819 1+0 records in 00:05:32.819 1+0 records out 00:05:32.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649612 s, 161 MB/s 00:05:32.819 02:38:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.819 02:38:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.819 02:38:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:32.819 02:38:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:32.819 02:38:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:32.819 No valid GPT data, bailing 00:05:32.819 02:38:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:32.819 02:38:43 -- scripts/common.sh@394 -- # pt= 00:05:32.819 02:38:43 -- scripts/common.sh@395 -- # return 1 00:05:32.819 02:38:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:32.819 1+0 records in 00:05:32.819 1+0 records out 00:05:32.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399949 s, 262 MB/s 00:05:32.819 02:38:43 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:32.819 02:38:43 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:32.819 02:38:43 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:32.819 02:38:43 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:32.819 02:38:43 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:32.819 No valid GPT data, bailing 00:05:32.819 02:38:43 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:32.819 02:38:43 -- scripts/common.sh@394 -- # pt= 00:05:32.819 02:38:43 -- scripts/common.sh@395 -- # return 1 00:05:32.819 02:38:43 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:32.819 1+0 records in 00:05:32.819 1+0 records out 00:05:32.819 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432971 s, 242 MB/s 00:05:32.819 02:38:43 -- spdk/autotest.sh@105 -- # sync 00:05:33.078 02:38:43 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:33.078 02:38:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:33.078 02:38:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:36.372 02:38:46 -- spdk/autotest.sh@111 -- # uname -s 00:05:36.373 02:38:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:36.373 02:38:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:36.373 02:38:46 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:36.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.632 Hugepages 00:05:36.632 node hugesize free / total 00:05:36.632 node0 1048576kB 0 / 0 00:05:36.632 node0 2048kB 0 / 0 00:05:36.632 00:05:36.632 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.632 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:36.898 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:36.898 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:36.898 02:38:47 -- spdk/autotest.sh@117 -- # uname -s 00:05:36.898 02:38:47 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:36.898 02:38:47 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:36.898 02:38:47 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.853 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.853 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.853 02:38:48 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:38.793 02:38:49 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:38.793 02:38:49 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:38.793 02:38:49 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:38.793 02:38:49 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:38.793 02:38:49 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:38.793 02:38:49 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:38.793 02:38:49 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.793 02:38:49 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.793 02:38:49 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:39.052 02:38:49 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:39.052 02:38:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:39.052 02:38:49 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.622 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.622 Waiting for block devices as requested 00:05:39.622 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:39.622 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:39.882 02:38:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:39.882 02:38:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:39.882 02:38:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:39.882 02:38:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:39.882 02:38:50 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:39.882 02:38:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1541 -- # continue 00:05:39.882 02:38:50 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:39.882 02:38:50 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:39.882 02:38:50 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:39.882 02:38:50 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:39.882 02:38:50 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:39.882 02:38:50 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:39.882 02:38:50 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:39.882 02:38:50 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:39.882 02:38:50 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:39.882 02:38:50 -- common/autotest_common.sh@1541 -- # continue 00:05:39.882 02:38:50 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:39.882 02:38:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:39.882 02:38:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.882 02:38:50 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:39.882 02:38:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:39.882 02:38:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.882 02:38:50 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.821 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.821 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.821 02:38:51 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:40.821 02:38:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:40.821 02:38:51 -- common/autotest_common.sh@10 -- # set +x 00:05:41.081 02:38:51 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:41.081 02:38:51 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:41.081 02:38:51 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:41.081 02:38:51 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:41.081 02:38:51 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:41.081 02:38:51 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:41.081 02:38:51 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:41.081 02:38:51 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:41.081 
02:38:51 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:41.081 02:38:51 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:41.081 02:38:51 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:41.081 02:38:51 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:41.081 02:38:51 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:41.081 02:38:52 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:41.081 02:38:52 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:41.081 02:38:52 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:41.081 02:38:52 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:41.082 02:38:52 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:41.082 02:38:52 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:41.082 02:38:52 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:41.082 02:38:52 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:41.082 02:38:52 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:41.082 02:38:52 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:41.082 02:38:52 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:41.082 02:38:52 -- common/autotest_common.sh@1570 -- # return 0 00:05:41.082 02:38:52 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:41.082 02:38:52 -- common/autotest_common.sh@1578 -- # return 0 00:05:41.082 02:38:52 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:41.082 02:38:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:41.082 02:38:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.082 02:38:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:41.082 02:38:52 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:41.082 02:38:52 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.082 02:38:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.082 02:38:52 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:41.082 02:38:52 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:41.082 02:38:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.082 02:38:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.082 02:38:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.082 ************************************ 00:05:41.082 START TEST env 00:05:41.082 ************************************ 00:05:41.082 02:38:52 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:41.342 * Looking for test storage... 00:05:41.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.342 02:38:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.342 02:38:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.342 02:38:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.342 02:38:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.342 02:38:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.342 02:38:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.342 02:38:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.342 02:38:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.342 02:38:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.342 02:38:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.342 02:38:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.342 02:38:52 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:41.342 02:38:52 env -- scripts/common.sh@345 -- # : 1 00:05:41.342 02:38:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.342 02:38:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.342 02:38:52 env -- scripts/common.sh@365 -- # decimal 1 00:05:41.342 02:38:52 env -- scripts/common.sh@353 -- # local d=1 00:05:41.342 02:38:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.342 02:38:52 env -- scripts/common.sh@355 -- # echo 1 00:05:41.342 02:38:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.342 02:38:52 env -- scripts/common.sh@366 -- # decimal 2 00:05:41.342 02:38:52 env -- scripts/common.sh@353 -- # local d=2 00:05:41.342 02:38:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.342 02:38:52 env -- scripts/common.sh@355 -- # echo 2 00:05:41.342 02:38:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.342 02:38:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.342 02:38:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.342 02:38:52 env -- scripts/common.sh@368 -- # return 0 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.342 --rc genhtml_branch_coverage=1 00:05:41.342 --rc genhtml_function_coverage=1 00:05:41.342 --rc genhtml_legend=1 00:05:41.342 --rc geninfo_all_blocks=1 00:05:41.342 --rc geninfo_unexecuted_blocks=1 00:05:41.342 00:05:41.342 ' 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.342 --rc genhtml_branch_coverage=1 00:05:41.342 --rc genhtml_function_coverage=1 00:05:41.342 --rc genhtml_legend=1 00:05:41.342 --rc 
geninfo_all_blocks=1 00:05:41.342 --rc geninfo_unexecuted_blocks=1 00:05:41.342 00:05:41.342 ' 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.342 --rc genhtml_branch_coverage=1 00:05:41.342 --rc genhtml_function_coverage=1 00:05:41.342 --rc genhtml_legend=1 00:05:41.342 --rc geninfo_all_blocks=1 00:05:41.342 --rc geninfo_unexecuted_blocks=1 00:05:41.342 00:05:41.342 ' 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.342 --rc genhtml_branch_coverage=1 00:05:41.342 --rc genhtml_function_coverage=1 00:05:41.342 --rc genhtml_legend=1 00:05:41.342 --rc geninfo_all_blocks=1 00:05:41.342 --rc geninfo_unexecuted_blocks=1 00:05:41.342 00:05:41.342 ' 00:05:41.342 02:38:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.342 02:38:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.342 02:38:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.342 ************************************ 00:05:41.342 START TEST env_memory 00:05:41.342 ************************************ 00:05:41.342 02:38:52 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:41.342 00:05:41.342 00:05:41.342 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.342 http://cunit.sourceforge.net/ 00:05:41.342 00:05:41.342 00:05:41.342 Suite: memory 00:05:41.342 Test: alloc and free memory map ...[2024-12-07 02:38:52.360891] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:41.342 passed 00:05:41.342 Test: mem map translation ...[2024-12-07 02:38:52.401882] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:41.342 [2024-12-07 02:38:52.401922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:41.342 [2024-12-07 02:38:52.401976] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:41.342 [2024-12-07 02:38:52.401994] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:41.603 passed 00:05:41.603 Test: mem map registration ...[2024-12-07 02:38:52.465580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:41.603 [2024-12-07 02:38:52.465619] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:41.603 passed 00:05:41.603 Test: mem map adjacent registrations ...passed 00:05:41.603 00:05:41.603 Run Summary: Type Total Ran Passed Failed Inactive 00:05:41.603 suites 1 1 n/a 0 0 00:05:41.603 tests 4 4 4 0 0 00:05:41.603 asserts 152 152 152 0 n/a 00:05:41.603 00:05:41.603 Elapsed time = 0.225 seconds 00:05:41.603 00:05:41.603 real 0m0.275s 00:05:41.603 user 0m0.235s 00:05:41.603 sys 0m0.030s 00:05:41.603 ************************************ 00:05:41.603 END TEST env_memory 00:05:41.603 ************************************ 00:05:41.603 02:38:52 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.603 02:38:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:41.603 02:38:52 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.603 
02:38:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.603 02:38:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.603 02:38:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:41.603 ************************************ 00:05:41.603 START TEST env_vtophys 00:05:41.603 ************************************ 00:05:41.603 02:38:52 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:41.603 EAL: lib.eal log level changed from notice to debug 00:05:41.603 EAL: Detected lcore 0 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 1 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 2 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 3 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 4 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 5 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 6 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 7 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 8 as core 0 on socket 0 00:05:41.603 EAL: Detected lcore 9 as core 0 on socket 0 00:05:41.863 EAL: Maximum logical cores by configuration: 128 00:05:41.863 EAL: Detected CPU lcores: 10 00:05:41.863 EAL: Detected NUMA nodes: 1 00:05:41.863 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:41.863 EAL: Detected shared linkage of DPDK 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:41.863 EAL: Registered [vdev] bus. 
00:05:41.863 EAL: bus.vdev log level changed from disabled to notice 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:41.863 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:41.863 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:41.863 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:41.863 EAL: No shared files mode enabled, IPC will be disabled 00:05:41.863 EAL: No shared files mode enabled, IPC is disabled 00:05:41.863 EAL: Selected IOVA mode 'PA' 00:05:41.863 EAL: Probing VFIO support... 00:05:41.863 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.863 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:41.863 EAL: Ask a virtual area of 0x2e000 bytes 00:05:41.863 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:41.863 EAL: Setting up physically contiguous memory... 
00:05:41.863 EAL: Setting maximum number of open files to 524288 00:05:41.863 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:41.863 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:41.863 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.863 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:41.863 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.863 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.863 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:41.864 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:41.864 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.864 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:41.864 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.864 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.864 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:41.864 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:41.864 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.864 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:41.864 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.864 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.864 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:41.864 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:41.864 EAL: Ask a virtual area of 0x61000 bytes 00:05:41.864 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:41.864 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:41.864 EAL: Ask a virtual area of 0x400000000 bytes 00:05:41.864 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:41.864 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:41.864 EAL: Hugepages will be freed exactly as allocated. 
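A back-of-the-envelope check of the memseg reservations logged above, derived only from the EAL lines themselves (n_segs:8192, hugepage_sz:2097152, "Creating 4 segment lists"), not from DPDK source: each list should reserve n_segs × hugepage_sz of virtual address space, which is exactly the `size = 0x400000000` shown for each VA reservation.

```python
# Sketch: verify the per-list VA reservation size against the EAL log above.
# All figures are read off the log; nothing here comes from DPDK internals.
n_segs = 8192                      # "n_segs:8192"
hugepage_sz = 2 * 1024 * 1024      # "hugepage_sz:2097152" (2 MiB pages)
n_lists = 4                        # "Creating 4 segment lists"

per_list = n_segs * hugepage_sz
print(hex(per_list))               # 0x400000000, matching each reservation
print(n_lists * per_list // 2**30) # total VA reserved across lists, in GiB
```

So the four lists together pin down 64 GiB of virtual address space up front, even though only 2 MiB of heap is actually populated at startup.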
00:05:41.864 EAL: No shared files mode enabled, IPC is disabled 00:05:41.864 EAL: No shared files mode enabled, IPC is disabled 00:05:41.864 EAL: TSC frequency is ~2290000 KHz 00:05:41.864 EAL: Main lcore 0 is ready (tid=7f51937e6a40;cpuset=[0]) 00:05:41.864 EAL: Trying to obtain current memory policy. 00:05:41.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.864 EAL: Restoring previous memory policy: 0 00:05:41.864 EAL: request: mp_malloc_sync 00:05:41.864 EAL: No shared files mode enabled, IPC is disabled 00:05:41.864 EAL: Heap on socket 0 was expanded by 2MB 00:05:41.864 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:41.864 EAL: No shared files mode enabled, IPC is disabled 00:05:41.864 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:41.864 EAL: Mem event callback 'spdk:(nil)' registered 00:05:41.864 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:41.864 00:05:41.864 00:05:41.864 CUnit - A unit testing framework for C - Version 2.1-3 00:05:41.864 http://cunit.sourceforge.net/ 00:05:41.864 00:05:41.864 00:05:41.864 Suite: components_suite 00:05:42.123 Test: vtophys_malloc_test ...passed 00:05:42.123 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:42.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.123 EAL: Restoring previous memory policy: 4 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was expanded by 4MB 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was shrunk by 4MB 00:05:42.123 EAL: Trying to obtain current memory policy. 
00:05:42.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.123 EAL: Restoring previous memory policy: 4 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was expanded by 6MB 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was shrunk by 6MB 00:05:42.123 EAL: Trying to obtain current memory policy. 00:05:42.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.123 EAL: Restoring previous memory policy: 4 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was expanded by 10MB 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was shrunk by 10MB 00:05:42.123 EAL: Trying to obtain current memory policy. 00:05:42.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.123 EAL: Restoring previous memory policy: 4 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was expanded by 18MB 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was shrunk by 18MB 00:05:42.123 EAL: Trying to obtain current memory policy. 
00:05:42.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.123 EAL: Restoring previous memory policy: 4 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was expanded by 34MB 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was shrunk by 34MB 00:05:42.123 EAL: Trying to obtain current memory policy. 00:05:42.123 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.123 EAL: Restoring previous memory policy: 4 00:05:42.123 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.123 EAL: request: mp_malloc_sync 00:05:42.123 EAL: No shared files mode enabled, IPC is disabled 00:05:42.123 EAL: Heap on socket 0 was expanded by 66MB 00:05:42.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.383 EAL: request: mp_malloc_sync 00:05:42.383 EAL: No shared files mode enabled, IPC is disabled 00:05:42.383 EAL: Heap on socket 0 was shrunk by 66MB 00:05:42.383 EAL: Trying to obtain current memory policy. 00:05:42.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.383 EAL: Restoring previous memory policy: 4 00:05:42.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.383 EAL: request: mp_malloc_sync 00:05:42.383 EAL: No shared files mode enabled, IPC is disabled 00:05:42.383 EAL: Heap on socket 0 was expanded by 130MB 00:05:42.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.383 EAL: request: mp_malloc_sync 00:05:42.383 EAL: No shared files mode enabled, IPC is disabled 00:05:42.383 EAL: Heap on socket 0 was shrunk by 130MB 00:05:42.383 EAL: Trying to obtain current memory policy. 
00:05:42.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.383 EAL: Restoring previous memory policy: 4 00:05:42.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.383 EAL: request: mp_malloc_sync 00:05:42.383 EAL: No shared files mode enabled, IPC is disabled 00:05:42.383 EAL: Heap on socket 0 was expanded by 258MB 00:05:42.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.383 EAL: request: mp_malloc_sync 00:05:42.383 EAL: No shared files mode enabled, IPC is disabled 00:05:42.383 EAL: Heap on socket 0 was shrunk by 258MB 00:05:42.383 EAL: Trying to obtain current memory policy. 00:05:42.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.644 EAL: Restoring previous memory policy: 4 00:05:42.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.644 EAL: request: mp_malloc_sync 00:05:42.644 EAL: No shared files mode enabled, IPC is disabled 00:05:42.644 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.644 EAL: request: mp_malloc_sync 00:05:42.644 EAL: No shared files mode enabled, IPC is disabled 00:05:42.644 EAL: Heap on socket 0 was shrunk by 514MB 00:05:42.644 EAL: Trying to obtain current memory policy. 
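The vtophys suite above grows and shrinks the heap through 4, 6, 10, 18, 34, 66, 130, 258, and 514 MB. Judging purely from those "expanded by N MB" messages (the test source is not shown here), the sizes appear to follow 2^k + 2 MB:

```python
# Hypothetical reconstruction of the allocation-size sequence seen in the
# expand/shrink messages above; read off the log, not taken from test code.
sizes = [2**k + 2 for k in range(1, 10)]
print(sizes)  # [4, 6, 10, 18, 34, 66, 130, 258, 514]
```

If the pattern holds, the next step would be 2^10 + 2 = 1026 MB, which is indeed the final expansion the log reports before the suite's summary.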
00:05:42.644 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:42.905 EAL: Restoring previous memory policy: 4 00:05:42.905 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.905 EAL: request: mp_malloc_sync 00:05:42.905 EAL: No shared files mode enabled, IPC is disabled 00:05:42.905 EAL: Heap on socket 0 was expanded by 1026MB 00:05:43.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.165 passed 00:05:43.165 00:05:43.165 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.165 suites 1 1 n/a 0 0 00:05:43.165 tests 2 2 2 0 0 00:05:43.165 asserts 5918 5918 5918 0 n/a 00:05:43.165 00:05:43.165 Elapsed time = 1.353 seconds 00:05:43.165 EAL: request: mp_malloc_sync 00:05:43.165 EAL: No shared files mode enabled, IPC is disabled 00:05:43.165 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:43.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.165 EAL: request: mp_malloc_sync 00:05:43.165 EAL: No shared files mode enabled, IPC is disabled 00:05:43.165 EAL: Heap on socket 0 was shrunk by 2MB 00:05:43.165 EAL: No shared files mode enabled, IPC is disabled 00:05:43.165 EAL: No shared files mode enabled, IPC is disabled 00:05:43.165 EAL: No shared files mode enabled, IPC is disabled 00:05:43.425 00:05:43.425 real 0m1.608s 00:05:43.425 user 0m0.767s 00:05:43.425 sys 0m0.704s 00:05:43.425 02:38:54 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.425 ************************************ 00:05:43.425 END TEST env_vtophys 00:05:43.425 ************************************ 00:05:43.425 02:38:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:43.425 02:38:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.425 02:38:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.425 02:38:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.425 02:38:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.425 
************************************ 00:05:43.425 START TEST env_pci 00:05:43.425 ************************************ 00:05:43.425 02:38:54 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:43.425 00:05:43.425 00:05:43.425 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.425 http://cunit.sourceforge.net/ 00:05:43.425 00:05:43.425 00:05:43.425 Suite: pci 00:05:43.425 Test: pci_hook ...[2024-12-07 02:38:54.348204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69082 has claimed it 00:05:43.425 passed 00:05:43.425 00:05:43.425 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.425 suites 1 1 n/a 0 0 00:05:43.425 tests 1 1 1 0 0 00:05:43.425 asserts 25 25 25 0 n/a 00:05:43.425 00:05:43.425 Elapsed time = 0.007 seconds 00:05:43.425 EAL: Cannot find device (10000:00:01.0) 00:05:43.425 EAL: Failed to attach device on primary process 00:05:43.425 00:05:43.425 real 0m0.096s 00:05:43.425 user 0m0.036s 00:05:43.425 sys 0m0.059s 00:05:43.425 ************************************ 00:05:43.425 END TEST env_pci 00:05:43.425 ************************************ 00:05:43.425 02:38:54 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.425 02:38:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:43.425 02:38:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:43.425 02:38:54 env -- env/env.sh@15 -- # uname 00:05:43.425 02:38:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:43.425 02:38:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:43.425 02:38:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.425 02:38:54 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:43.425 02:38:54 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.425 02:38:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.425 ************************************ 00:05:43.425 START TEST env_dpdk_post_init 00:05:43.425 ************************************ 00:05:43.425 02:38:54 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:43.685 EAL: Detected CPU lcores: 10 00:05:43.685 EAL: Detected NUMA nodes: 1 00:05:43.685 EAL: Detected shared linkage of DPDK 00:05:43.685 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.685 EAL: Selected IOVA mode 'PA' 00:05:43.685 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.685 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:43.685 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:43.685 Starting DPDK initialization... 00:05:43.685 Starting SPDK post initialization... 00:05:43.685 SPDK NVMe probe 00:05:43.685 Attaching to 0000:00:10.0 00:05:43.685 Attaching to 0000:00:11.0 00:05:43.685 Attached to 0000:00:10.0 00:05:43.685 Attached to 0000:00:11.0 00:05:43.685 Cleaning up... 
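The probe lines above attach to PCI addresses such as `0000:00:10.0` and `0000:00:11.0`, the same domain:bus:device.function form the earlier `gen_nvme.sh | jq -r '.config[].params.traddr'` step extracts. An illustrative helper (not part of the test scripts) for splitting that form:

```python
# Illustrative BDF parser for addresses like "0000:00:10.0"; the function
# name is hypothetical and not an SPDK utility.
def parse_bdf(addr):
    domain, bus, rest = addr.split(":")
    device, function = rest.split(".")
    # All four components are hexadecimal in the BDF notation.
    return int(domain, 16), int(bus, 16), int(device, 16), int(function, 16)

print(parse_bdf("0000:00:10.0"))  # (0, 0, 16, 0)
```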
00:05:43.685 00:05:43.685 real 0m0.249s 00:05:43.685 user 0m0.061s 00:05:43.685 sys 0m0.089s 00:05:43.685 02:38:54 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.685 02:38:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.685 ************************************ 00:05:43.685 END TEST env_dpdk_post_init 00:05:43.685 ************************************ 00:05:43.946 02:38:54 env -- env/env.sh@26 -- # uname 00:05:43.946 02:38:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.946 02:38:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.946 02:38:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.946 02:38:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.946 02:38:54 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.946 ************************************ 00:05:43.946 START TEST env_mem_callbacks 00:05:43.946 ************************************ 00:05:43.946 02:38:54 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.946 EAL: Detected CPU lcores: 10 00:05:43.946 EAL: Detected NUMA nodes: 1 00:05:43.946 EAL: Detected shared linkage of DPDK 00:05:43.946 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.946 EAL: Selected IOVA mode 'PA' 00:05:43.946 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.946 00:05:43.946 00:05:43.946 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.946 http://cunit.sourceforge.net/ 00:05:43.946 00:05:43.946 00:05:43.946 Suite: memory 00:05:43.946 Test: test ... 
00:05:43.946 register 0x200000200000 2097152 00:05:43.946 malloc 3145728 00:05:43.946 register 0x200000400000 4194304 00:05:43.946 buf 0x200000500000 len 3145728 PASSED 00:05:43.946 malloc 64 00:05:43.946 buf 0x2000004fff40 len 64 PASSED 00:05:43.946 malloc 4194304 00:05:43.946 register 0x200000800000 6291456 00:05:43.946 buf 0x200000a00000 len 4194304 PASSED 00:05:43.946 free 0x200000500000 3145728 00:05:43.946 free 0x2000004fff40 64 00:05:43.946 unregister 0x200000400000 4194304 PASSED 00:05:43.946 free 0x200000a00000 4194304 00:05:43.946 unregister 0x200000800000 6291456 PASSED 00:05:43.946 malloc 8388608 00:05:43.946 register 0x200000400000 10485760 00:05:43.946 buf 0x200000600000 len 8388608 PASSED 00:05:43.946 free 0x200000600000 8388608 00:05:43.946 unregister 0x200000400000 10485760 PASSED 00:05:43.946 passed 00:05:43.946 00:05:43.946 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.946 suites 1 1 n/a 0 0 00:05:43.946 tests 1 1 1 0 0 00:05:43.946 asserts 15 15 15 0 n/a 00:05:43.946 00:05:43.946 Elapsed time = 0.012 seconds 00:05:43.946 00:05:43.946 real 0m0.203s 00:05:43.946 user 0m0.045s 00:05:43.946 sys 0m0.056s 00:05:43.946 02:38:55 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.946 02:38:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:43.946 ************************************ 00:05:43.946 END TEST env_mem_callbacks 00:05:43.946 ************************************ 00:05:44.206 00:05:44.206 real 0m3.017s 00:05:44.206 user 0m1.372s 00:05:44.206 sys 0m1.304s 00:05:44.206 02:38:55 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.206 02:38:55 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 ************************************ 00:05:44.206 END TEST env 00:05:44.206 ************************************ 00:05:44.206 02:38:55 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.206 02:38:55 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.206 02:38:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.206 02:38:55 -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 ************************************ 00:05:44.206 START TEST rpc 00:05:44.206 ************************************ 00:05:44.206 02:38:55 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.206 * Looking for test storage... 00:05:44.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.206 02:38:55 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.206 02:38:55 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.206 02:38:55 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.466 02:38:55 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.466 02:38:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.466 02:38:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.466 02:38:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.466 02:38:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.466 02:38:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.466 02:38:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.466 02:38:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.466 02:38:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:44.466 02:38:55 rpc -- scripts/common.sh@345 -- # : 1 00:05:44.466 02:38:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.466 02:38:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.466 02:38:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:44.466 02:38:55 rpc -- scripts/common.sh@353 -- # local d=1 00:05:44.466 02:38:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.466 02:38:55 rpc -- scripts/common.sh@355 -- # echo 1 00:05:44.466 02:38:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.466 02:38:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@353 -- # local d=2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.466 02:38:55 rpc -- scripts/common.sh@355 -- # echo 2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.466 02:38:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.466 02:38:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.466 02:38:55 rpc -- scripts/common.sh@368 -- # return 0 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.467 --rc genhtml_branch_coverage=1 00:05:44.467 --rc genhtml_function_coverage=1 00:05:44.467 --rc genhtml_legend=1 00:05:44.467 --rc geninfo_all_blocks=1 00:05:44.467 --rc geninfo_unexecuted_blocks=1 00:05:44.467 00:05:44.467 ' 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.467 --rc genhtml_branch_coverage=1 00:05:44.467 --rc genhtml_function_coverage=1 00:05:44.467 --rc genhtml_legend=1 00:05:44.467 --rc geninfo_all_blocks=1 00:05:44.467 --rc geninfo_unexecuted_blocks=1 00:05:44.467 00:05:44.467 ' 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:44.467 --rc genhtml_branch_coverage=1 00:05:44.467 --rc genhtml_function_coverage=1 00:05:44.467 --rc genhtml_legend=1 00:05:44.467 --rc geninfo_all_blocks=1 00:05:44.467 --rc geninfo_unexecuted_blocks=1 00:05:44.467 00:05:44.467 ' 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.467 --rc genhtml_branch_coverage=1 00:05:44.467 --rc genhtml_function_coverage=1 00:05:44.467 --rc genhtml_legend=1 00:05:44.467 --rc geninfo_all_blocks=1 00:05:44.467 --rc geninfo_unexecuted_blocks=1 00:05:44.467 00:05:44.467 ' 00:05:44.467 02:38:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69209 00:05:44.467 02:38:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:44.467 02:38:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.467 02:38:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69209 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@831 -- # '[' -z 69209 ']' 00:05:44.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.467 02:38:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.467 [2024-12-07 02:38:55.450811] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:44.467 [2024-12-07 02:38:55.451032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69209 ] 00:05:44.726 [2024-12-07 02:38:55.610705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.726 [2024-12-07 02:38:55.653778] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:44.726 [2024-12-07 02:38:55.653834] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69209' to capture a snapshot of events at runtime. 00:05:44.726 [2024-12-07 02:38:55.653849] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.726 [2024-12-07 02:38:55.653858] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.726 [2024-12-07 02:38:55.653870] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69209 for offline analysis/debug. 
00:05:44.726 [2024-12-07 02:38:55.653918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.296 02:38:56 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.296 02:38:56 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:45.296 02:38:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.296 02:38:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.296 02:38:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:45.296 02:38:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:45.296 02:38:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.296 02:38:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.296 02:38:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.296 ************************************ 00:05:45.296 START TEST rpc_integrity 00:05:45.296 ************************************ 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:45.296 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.296 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.296 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.296 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.296 02:38:56 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.296 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:45.296 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.296 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.556 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.556 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.556 { 00:05:45.556 "name": "Malloc0", 00:05:45.556 "aliases": [ 00:05:45.556 "f4a7c851-9dce-4724-82f8-aba1750a6c86" 00:05:45.556 ], 00:05:45.556 "product_name": "Malloc disk", 00:05:45.556 "block_size": 512, 00:05:45.556 "num_blocks": 16384, 00:05:45.556 "uuid": "f4a7c851-9dce-4724-82f8-aba1750a6c86", 00:05:45.556 "assigned_rate_limits": { 00:05:45.556 "rw_ios_per_sec": 0, 00:05:45.556 "rw_mbytes_per_sec": 0, 00:05:45.556 "r_mbytes_per_sec": 0, 00:05:45.556 "w_mbytes_per_sec": 0 00:05:45.556 }, 00:05:45.556 "claimed": false, 00:05:45.556 "zoned": false, 00:05:45.556 "supported_io_types": { 00:05:45.556 "read": true, 00:05:45.556 "write": true, 00:05:45.556 "unmap": true, 00:05:45.556 "flush": true, 00:05:45.556 "reset": true, 00:05:45.556 "nvme_admin": false, 00:05:45.556 "nvme_io": false, 00:05:45.556 "nvme_io_md": false, 00:05:45.556 "write_zeroes": true, 00:05:45.556 "zcopy": true, 00:05:45.556 "get_zone_info": false, 00:05:45.556 "zone_management": false, 00:05:45.556 "zone_append": false, 00:05:45.556 "compare": false, 00:05:45.556 "compare_and_write": false, 00:05:45.556 "abort": true, 00:05:45.556 "seek_hole": false, 
00:05:45.556 "seek_data": false, 00:05:45.556 "copy": true, 00:05:45.556 "nvme_iov_md": false 00:05:45.556 }, 00:05:45.556 "memory_domains": [ 00:05:45.556 { 00:05:45.556 "dma_device_id": "system", 00:05:45.556 "dma_device_type": 1 00:05:45.556 }, 00:05:45.556 { 00:05:45.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.556 "dma_device_type": 2 00:05:45.556 } 00:05:45.556 ], 00:05:45.556 "driver_specific": {} 00:05:45.556 } 00:05:45.556 ]' 00:05:45.556 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.556 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.556 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:45.556 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.556 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.556 [2024-12-07 02:38:56.441927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:45.556 [2024-12-07 02:38:56.441995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.556 [2024-12-07 02:38:56.442028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:45.556 [2024-12-07 02:38:56.442038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.556 [2024-12-07 02:38:56.444465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.556 [2024-12-07 02:38:56.444503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.556 Passthru0 00:05:45.556 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.556 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.556 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.557 { 00:05:45.557 "name": "Malloc0", 00:05:45.557 "aliases": [ 00:05:45.557 "f4a7c851-9dce-4724-82f8-aba1750a6c86" 00:05:45.557 ], 00:05:45.557 "product_name": "Malloc disk", 00:05:45.557 "block_size": 512, 00:05:45.557 "num_blocks": 16384, 00:05:45.557 "uuid": "f4a7c851-9dce-4724-82f8-aba1750a6c86", 00:05:45.557 "assigned_rate_limits": { 00:05:45.557 "rw_ios_per_sec": 0, 00:05:45.557 "rw_mbytes_per_sec": 0, 00:05:45.557 "r_mbytes_per_sec": 0, 00:05:45.557 "w_mbytes_per_sec": 0 00:05:45.557 }, 00:05:45.557 "claimed": true, 00:05:45.557 "claim_type": "exclusive_write", 00:05:45.557 "zoned": false, 00:05:45.557 "supported_io_types": { 00:05:45.557 "read": true, 00:05:45.557 "write": true, 00:05:45.557 "unmap": true, 00:05:45.557 "flush": true, 00:05:45.557 "reset": true, 00:05:45.557 "nvme_admin": false, 00:05:45.557 "nvme_io": false, 00:05:45.557 "nvme_io_md": false, 00:05:45.557 "write_zeroes": true, 00:05:45.557 "zcopy": true, 00:05:45.557 "get_zone_info": false, 00:05:45.557 "zone_management": false, 00:05:45.557 "zone_append": false, 00:05:45.557 "compare": false, 00:05:45.557 "compare_and_write": false, 00:05:45.557 "abort": true, 00:05:45.557 "seek_hole": false, 00:05:45.557 "seek_data": false, 00:05:45.557 "copy": true, 00:05:45.557 "nvme_iov_md": false 00:05:45.557 }, 00:05:45.557 "memory_domains": [ 00:05:45.557 { 00:05:45.557 "dma_device_id": "system", 00:05:45.557 "dma_device_type": 1 00:05:45.557 }, 00:05:45.557 { 00:05:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.557 "dma_device_type": 2 00:05:45.557 } 00:05:45.557 ], 00:05:45.557 "driver_specific": {} 00:05:45.557 }, 00:05:45.557 { 00:05:45.557 "name": "Passthru0", 00:05:45.557 "aliases": [ 00:05:45.557 "af0524f1-40d5-55c4-8909-408ab2ce0f43" 00:05:45.557 ], 00:05:45.557 "product_name": "passthru", 00:05:45.557 
"block_size": 512, 00:05:45.557 "num_blocks": 16384, 00:05:45.557 "uuid": "af0524f1-40d5-55c4-8909-408ab2ce0f43", 00:05:45.557 "assigned_rate_limits": { 00:05:45.557 "rw_ios_per_sec": 0, 00:05:45.557 "rw_mbytes_per_sec": 0, 00:05:45.557 "r_mbytes_per_sec": 0, 00:05:45.557 "w_mbytes_per_sec": 0 00:05:45.557 }, 00:05:45.557 "claimed": false, 00:05:45.557 "zoned": false, 00:05:45.557 "supported_io_types": { 00:05:45.557 "read": true, 00:05:45.557 "write": true, 00:05:45.557 "unmap": true, 00:05:45.557 "flush": true, 00:05:45.557 "reset": true, 00:05:45.557 "nvme_admin": false, 00:05:45.557 "nvme_io": false, 00:05:45.557 "nvme_io_md": false, 00:05:45.557 "write_zeroes": true, 00:05:45.557 "zcopy": true, 00:05:45.557 "get_zone_info": false, 00:05:45.557 "zone_management": false, 00:05:45.557 "zone_append": false, 00:05:45.557 "compare": false, 00:05:45.557 "compare_and_write": false, 00:05:45.557 "abort": true, 00:05:45.557 "seek_hole": false, 00:05:45.557 "seek_data": false, 00:05:45.557 "copy": true, 00:05:45.557 "nvme_iov_md": false 00:05:45.557 }, 00:05:45.557 "memory_domains": [ 00:05:45.557 { 00:05:45.557 "dma_device_id": "system", 00:05:45.557 "dma_device_type": 1 00:05:45.557 }, 00:05:45.557 { 00:05:45.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.557 "dma_device_type": 2 00:05:45.557 } 00:05:45.557 ], 00:05:45.557 "driver_specific": { 00:05:45.557 "passthru": { 00:05:45.557 "name": "Passthru0", 00:05:45.557 "base_bdev_name": "Malloc0" 00:05:45.557 } 00:05:45.557 } 00:05:45.557 } 00:05:45.557 ]' 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.557 02:38:56 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.557 ************************************ 00:05:45.557 END TEST rpc_integrity 00:05:45.557 ************************************ 00:05:45.557 02:38:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.557 00:05:45.557 real 0m0.332s 00:05:45.557 user 0m0.194s 00:05:45.557 sys 0m0.060s 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.557 02:38:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.817 02:38:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:45.817 02:38:56 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.817 02:38:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.817 02:38:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.817 ************************************ 00:05:45.817 START TEST rpc_plugins 00:05:45.817 ************************************ 00:05:45.817 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:45.817 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:45.817 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.817 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.817 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.817 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:45.817 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:45.817 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.817 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.817 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.817 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.817 { 00:05:45.817 "name": "Malloc1", 00:05:45.817 "aliases": [ 00:05:45.817 "eb3414b7-2211-4331-b752-90f5c56b8959" 00:05:45.817 ], 00:05:45.817 "product_name": "Malloc disk", 00:05:45.817 "block_size": 4096, 00:05:45.817 "num_blocks": 256, 00:05:45.817 "uuid": "eb3414b7-2211-4331-b752-90f5c56b8959", 00:05:45.817 "assigned_rate_limits": { 00:05:45.817 "rw_ios_per_sec": 0, 00:05:45.817 "rw_mbytes_per_sec": 0, 00:05:45.818 "r_mbytes_per_sec": 0, 00:05:45.818 "w_mbytes_per_sec": 0 00:05:45.818 }, 00:05:45.818 "claimed": false, 00:05:45.818 "zoned": false, 00:05:45.818 "supported_io_types": { 00:05:45.818 "read": true, 00:05:45.818 "write": true, 00:05:45.818 "unmap": true, 00:05:45.818 "flush": true, 00:05:45.818 "reset": true, 00:05:45.818 "nvme_admin": false, 00:05:45.818 "nvme_io": false, 00:05:45.818 "nvme_io_md": false, 00:05:45.818 "write_zeroes": true, 00:05:45.818 "zcopy": true, 00:05:45.818 "get_zone_info": false, 00:05:45.818 "zone_management": false, 00:05:45.818 "zone_append": false, 00:05:45.818 "compare": false, 00:05:45.818 "compare_and_write": false, 00:05:45.818 "abort": true, 00:05:45.818 "seek_hole": false, 00:05:45.818 "seek_data": false, 00:05:45.818 "copy": 
true, 00:05:45.818 "nvme_iov_md": false 00:05:45.818 }, 00:05:45.818 "memory_domains": [ 00:05:45.818 { 00:05:45.818 "dma_device_id": "system", 00:05:45.818 "dma_device_type": 1 00:05:45.818 }, 00:05:45.818 { 00:05:45.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.818 "dma_device_type": 2 00:05:45.818 } 00:05:45.818 ], 00:05:45.818 "driver_specific": {} 00:05:45.818 } 00:05:45.818 ]' 00:05:45.818 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.818 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.818 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.818 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.818 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.818 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.818 ************************************ 00:05:45.818 END TEST rpc_plugins 00:05:45.818 ************************************ 00:05:45.818 02:38:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.818 00:05:45.818 real 0m0.163s 00:05:45.818 user 0m0.092s 00:05:45.818 sys 0m0.030s 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.818 02:38:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.818 02:38:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.818 02:38:56 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.818 02:38:56 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.818 02:38:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.078 ************************************ 00:05:46.078 START TEST rpc_trace_cmd_test 00:05:46.078 ************************************ 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.078 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69209", 00:05:46.078 "tpoint_group_mask": "0x8", 00:05:46.078 "iscsi_conn": { 00:05:46.078 "mask": "0x2", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "scsi": { 00:05:46.078 "mask": "0x4", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "bdev": { 00:05:46.078 "mask": "0x8", 00:05:46.078 "tpoint_mask": "0xffffffffffffffff" 00:05:46.078 }, 00:05:46.078 "nvmf_rdma": { 00:05:46.078 "mask": "0x10", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "nvmf_tcp": { 00:05:46.078 "mask": "0x20", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "ftl": { 00:05:46.078 "mask": "0x40", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "blobfs": { 00:05:46.078 "mask": "0x80", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "dsa": { 00:05:46.078 "mask": "0x200", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "thread": { 00:05:46.078 "mask": "0x400", 00:05:46.078 
"tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "nvme_pcie": { 00:05:46.078 "mask": "0x800", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "iaa": { 00:05:46.078 "mask": "0x1000", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "nvme_tcp": { 00:05:46.078 "mask": "0x2000", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "bdev_nvme": { 00:05:46.078 "mask": "0x4000", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "sock": { 00:05:46.078 "mask": "0x8000", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "blob": { 00:05:46.078 "mask": "0x10000", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 }, 00:05:46.078 "bdev_raid": { 00:05:46.078 "mask": "0x20000", 00:05:46.078 "tpoint_mask": "0x0" 00:05:46.078 } 00:05:46.078 }' 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:46.078 02:38:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.078 ************************************ 00:05:46.078 END TEST rpc_trace_cmd_test 00:05:46.078 ************************************ 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.078 00:05:46.078 real 0m0.244s 00:05:46.078 user 0m0.193s 00:05:46.078 sys 0m0.037s 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.078 02:38:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.338 02:38:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.338 02:38:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.338 02:38:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.338 02:38:57 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.338 02:38:57 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.338 02:38:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.338 ************************************ 00:05:46.338 START TEST rpc_daemon_integrity 00:05:46.338 ************************************ 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.338 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.338 { 00:05:46.338 "name": "Malloc2", 00:05:46.338 "aliases": [ 00:05:46.338 "c435e8d9-a37a-4386-b146-c077da40049f" 00:05:46.338 ], 00:05:46.338 "product_name": "Malloc disk", 00:05:46.338 "block_size": 512, 00:05:46.338 "num_blocks": 16384, 00:05:46.338 "uuid": "c435e8d9-a37a-4386-b146-c077da40049f", 00:05:46.338 "assigned_rate_limits": { 00:05:46.338 "rw_ios_per_sec": 0, 00:05:46.338 "rw_mbytes_per_sec": 0, 00:05:46.338 "r_mbytes_per_sec": 0, 00:05:46.338 "w_mbytes_per_sec": 0 00:05:46.338 }, 00:05:46.338 "claimed": false, 00:05:46.338 "zoned": false, 00:05:46.338 "supported_io_types": { 00:05:46.338 "read": true, 00:05:46.338 "write": true, 00:05:46.338 "unmap": true, 00:05:46.338 "flush": true, 00:05:46.338 "reset": true, 00:05:46.338 "nvme_admin": false, 00:05:46.338 "nvme_io": false, 00:05:46.338 "nvme_io_md": false, 00:05:46.338 "write_zeroes": true, 00:05:46.338 "zcopy": true, 00:05:46.338 "get_zone_info": false, 00:05:46.338 "zone_management": false, 00:05:46.338 "zone_append": false, 00:05:46.338 "compare": false, 00:05:46.338 "compare_and_write": false, 00:05:46.338 "abort": true, 00:05:46.338 "seek_hole": false, 00:05:46.338 "seek_data": false, 00:05:46.338 "copy": true, 00:05:46.338 "nvme_iov_md": false 00:05:46.338 }, 00:05:46.338 "memory_domains": [ 00:05:46.338 { 00:05:46.338 "dma_device_id": "system", 00:05:46.338 "dma_device_type": 1 00:05:46.338 }, 00:05:46.338 { 00:05:46.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.338 "dma_device_type": 2 00:05:46.338 } 00:05:46.338 ], 00:05:46.338 "driver_specific": {} 00:05:46.338 } 00:05:46.338 ]' 
00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.339 [2024-12-07 02:38:57.360694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:46.339 [2024-12-07 02:38:57.360746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.339 [2024-12-07 02:38:57.360770] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:46.339 [2024-12-07 02:38:57.360779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.339 [2024-12-07 02:38:57.362986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.339 [2024-12-07 02:38:57.363023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.339 Passthru0 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.339 { 00:05:46.339 "name": "Malloc2", 00:05:46.339 "aliases": [ 00:05:46.339 "c435e8d9-a37a-4386-b146-c077da40049f" 00:05:46.339 ], 00:05:46.339 "product_name": "Malloc disk", 00:05:46.339 "block_size": 
512, 00:05:46.339 "num_blocks": 16384, 00:05:46.339 "uuid": "c435e8d9-a37a-4386-b146-c077da40049f", 00:05:46.339 "assigned_rate_limits": { 00:05:46.339 "rw_ios_per_sec": 0, 00:05:46.339 "rw_mbytes_per_sec": 0, 00:05:46.339 "r_mbytes_per_sec": 0, 00:05:46.339 "w_mbytes_per_sec": 0 00:05:46.339 }, 00:05:46.339 "claimed": true, 00:05:46.339 "claim_type": "exclusive_write", 00:05:46.339 "zoned": false, 00:05:46.339 "supported_io_types": { 00:05:46.339 "read": true, 00:05:46.339 "write": true, 00:05:46.339 "unmap": true, 00:05:46.339 "flush": true, 00:05:46.339 "reset": true, 00:05:46.339 "nvme_admin": false, 00:05:46.339 "nvme_io": false, 00:05:46.339 "nvme_io_md": false, 00:05:46.339 "write_zeroes": true, 00:05:46.339 "zcopy": true, 00:05:46.339 "get_zone_info": false, 00:05:46.339 "zone_management": false, 00:05:46.339 "zone_append": false, 00:05:46.339 "compare": false, 00:05:46.339 "compare_and_write": false, 00:05:46.339 "abort": true, 00:05:46.339 "seek_hole": false, 00:05:46.339 "seek_data": false, 00:05:46.339 "copy": true, 00:05:46.339 "nvme_iov_md": false 00:05:46.339 }, 00:05:46.339 "memory_domains": [ 00:05:46.339 { 00:05:46.339 "dma_device_id": "system", 00:05:46.339 "dma_device_type": 1 00:05:46.339 }, 00:05:46.339 { 00:05:46.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.339 "dma_device_type": 2 00:05:46.339 } 00:05:46.339 ], 00:05:46.339 "driver_specific": {} 00:05:46.339 }, 00:05:46.339 { 00:05:46.339 "name": "Passthru0", 00:05:46.339 "aliases": [ 00:05:46.339 "83e7cb01-55b4-562b-bb88-49c8cea0dcb6" 00:05:46.339 ], 00:05:46.339 "product_name": "passthru", 00:05:46.339 "block_size": 512, 00:05:46.339 "num_blocks": 16384, 00:05:46.339 "uuid": "83e7cb01-55b4-562b-bb88-49c8cea0dcb6", 00:05:46.339 "assigned_rate_limits": { 00:05:46.339 "rw_ios_per_sec": 0, 00:05:46.339 "rw_mbytes_per_sec": 0, 00:05:46.339 "r_mbytes_per_sec": 0, 00:05:46.339 "w_mbytes_per_sec": 0 00:05:46.339 }, 00:05:46.339 "claimed": false, 00:05:46.339 "zoned": false, 00:05:46.339 
"supported_io_types": { 00:05:46.339 "read": true, 00:05:46.339 "write": true, 00:05:46.339 "unmap": true, 00:05:46.339 "flush": true, 00:05:46.339 "reset": true, 00:05:46.339 "nvme_admin": false, 00:05:46.339 "nvme_io": false, 00:05:46.339 "nvme_io_md": false, 00:05:46.339 "write_zeroes": true, 00:05:46.339 "zcopy": true, 00:05:46.339 "get_zone_info": false, 00:05:46.339 "zone_management": false, 00:05:46.339 "zone_append": false, 00:05:46.339 "compare": false, 00:05:46.339 "compare_and_write": false, 00:05:46.339 "abort": true, 00:05:46.339 "seek_hole": false, 00:05:46.339 "seek_data": false, 00:05:46.339 "copy": true, 00:05:46.339 "nvme_iov_md": false 00:05:46.339 }, 00:05:46.339 "memory_domains": [ 00:05:46.339 { 00:05:46.339 "dma_device_id": "system", 00:05:46.339 "dma_device_type": 1 00:05:46.339 }, 00:05:46.339 { 00:05:46.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.339 "dma_device_type": 2 00:05:46.339 } 00:05:46.339 ], 00:05:46.339 "driver_specific": { 00:05:46.339 "passthru": { 00:05:46.339 "name": "Passthru0", 00:05:46.339 "base_bdev_name": "Malloc2" 00:05:46.339 } 00:05:46.339 } 00:05:46.339 } 00:05:46.339 ]' 00:05:46.339 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.598 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.599 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.599 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.599 ************************************ 00:05:46.599 END TEST rpc_daemon_integrity 00:05:46.599 ************************************ 00:05:46.599 02:38:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.599 00:05:46.599 real 0m0.315s 00:05:46.599 user 0m0.194s 00:05:46.599 sys 0m0.048s 00:05:46.599 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.599 02:38:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.599 02:38:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:46.599 02:38:57 rpc -- rpc/rpc.sh@84 -- # killprocess 69209 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@950 -- # '[' -z 69209 ']' 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@954 -- # kill -0 69209 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@955 -- # uname 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69209 00:05:46.599 killing process with pid 69209 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 69209' 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@969 -- # kill 69209 00:05:46.599 02:38:57 rpc -- common/autotest_common.sh@974 -- # wait 69209 00:05:47.168 00:05:47.168 real 0m2.872s 00:05:47.168 user 0m3.407s 00:05:47.168 sys 0m0.892s 00:05:47.168 ************************************ 00:05:47.168 END TEST rpc 00:05:47.168 ************************************ 00:05:47.168 02:38:58 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.168 02:38:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.168 02:38:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.168 02:38:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.168 02:38:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.168 02:38:58 -- common/autotest_common.sh@10 -- # set +x 00:05:47.168 ************************************ 00:05:47.168 START TEST skip_rpc 00:05:47.168 ************************************ 00:05:47.168 02:38:58 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.168 * Looking for test storage... 
00:05:47.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.168 02:38:58 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:47.168 02:38:58 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:47.168 02:38:58 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.428 02:38:58 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.428 02:38:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:47.428 02:38:58 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.428 02:38:58 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.428 --rc genhtml_branch_coverage=1 00:05:47.428 --rc genhtml_function_coverage=1 00:05:47.428 --rc genhtml_legend=1 00:05:47.428 --rc geninfo_all_blocks=1 00:05:47.428 --rc geninfo_unexecuted_blocks=1 00:05:47.428 00:05:47.428 ' 00:05:47.428 02:38:58 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.428 --rc genhtml_branch_coverage=1 00:05:47.428 --rc genhtml_function_coverage=1 00:05:47.428 --rc genhtml_legend=1 00:05:47.428 --rc geninfo_all_blocks=1 00:05:47.428 --rc geninfo_unexecuted_blocks=1 00:05:47.428 00:05:47.428 ' 00:05:47.428 02:38:58 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:47.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.428 --rc genhtml_branch_coverage=1 00:05:47.428 --rc genhtml_function_coverage=1 00:05:47.428 --rc genhtml_legend=1 00:05:47.428 --rc geninfo_all_blocks=1 00:05:47.428 --rc geninfo_unexecuted_blocks=1 00:05:47.428 00:05:47.428 ' 00:05:47.428 02:38:58 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.428 --rc genhtml_branch_coverage=1 00:05:47.428 --rc genhtml_function_coverage=1 00:05:47.429 --rc genhtml_legend=1 00:05:47.429 --rc geninfo_all_blocks=1 00:05:47.429 --rc geninfo_unexecuted_blocks=1 00:05:47.429 00:05:47.429 ' 00:05:47.429 02:38:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.429 02:38:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:47.429 02:38:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.429 02:38:58 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.429 02:38:58 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.429 02:38:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.429 ************************************ 00:05:47.429 START TEST skip_rpc 00:05:47.429 ************************************ 00:05:47.429 02:38:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:47.429 02:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69416 00:05:47.429 02:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.429 02:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.429 02:38:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.429 [2024-12-07 02:38:58.414794] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:05:47.429 [2024-12-07 02:38:58.414998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69416 ] 00:05:47.689 [2024-12-07 02:38:58.573712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.689 [2024-12-07 02:38:58.625463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69416 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69416 ']' 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69416 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69416 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69416' 00:05:52.970 killing process with pid 69416 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69416 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69416 00:05:52.970 00:05:52.970 real 0m5.445s 00:05:52.970 ************************************ 00:05:52.970 END TEST skip_rpc 00:05:52.970 ************************************ 00:05:52.970 user 0m5.029s 00:05:52.970 sys 0m0.345s 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.970 02:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.970 02:39:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:52.970 02:39:03 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.970 02:39:03 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.970 02:39:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.970 
************************************ 00:05:52.970 START TEST skip_rpc_with_json 00:05:52.970 ************************************ 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69503 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69503 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69503 ']' 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.970 02:39:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.970 [2024-12-07 02:39:03.925001] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:05:52.970 [2024-12-07 02:39:03.925553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69503 ] 00:05:53.231 [2024-12-07 02:39:04.082673] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.231 [2024-12-07 02:39:04.127467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.800 [2024-12-07 02:39:04.729701] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:53.800 request: 00:05:53.800 { 00:05:53.800 "trtype": "tcp", 00:05:53.800 "method": "nvmf_get_transports", 00:05:53.800 "req_id": 1 00:05:53.800 } 00:05:53.800 Got JSON-RPC error response 00:05:53.800 response: 00:05:53.800 { 00:05:53.800 "code": -19, 00:05:53.800 "message": "No such device" 00:05:53.800 } 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.800 [2024-12-07 02:39:04.745805] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.800 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.060 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.060 02:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.060 { 00:05:54.060 "subsystems": [ 00:05:54.060 { 00:05:54.060 "subsystem": "fsdev", 00:05:54.060 "config": [ 00:05:54.060 { 00:05:54.060 "method": "fsdev_set_opts", 00:05:54.060 "params": { 00:05:54.060 "fsdev_io_pool_size": 65535, 00:05:54.060 "fsdev_io_cache_size": 256 00:05:54.060 } 00:05:54.060 } 00:05:54.060 ] 00:05:54.060 }, 00:05:54.060 { 00:05:54.060 "subsystem": "keyring", 00:05:54.060 "config": [] 00:05:54.060 }, 00:05:54.060 { 00:05:54.060 "subsystem": "iobuf", 00:05:54.060 "config": [ 00:05:54.060 { 00:05:54.060 "method": "iobuf_set_options", 00:05:54.060 "params": { 00:05:54.060 "small_pool_count": 8192, 00:05:54.060 "large_pool_count": 1024, 00:05:54.060 "small_bufsize": 8192, 00:05:54.060 "large_bufsize": 135168 00:05:54.060 } 00:05:54.060 } 00:05:54.060 ] 00:05:54.060 }, 00:05:54.060 { 00:05:54.060 "subsystem": "sock", 00:05:54.060 "config": [ 00:05:54.060 { 00:05:54.060 "method": "sock_set_default_impl", 00:05:54.060 "params": { 00:05:54.060 "impl_name": "posix" 00:05:54.060 } 00:05:54.060 }, 00:05:54.060 { 00:05:54.060 "method": "sock_impl_set_options", 00:05:54.060 "params": { 00:05:54.060 "impl_name": "ssl", 00:05:54.060 "recv_buf_size": 4096, 00:05:54.060 "send_buf_size": 4096, 00:05:54.060 "enable_recv_pipe": true, 00:05:54.060 "enable_quickack": false, 00:05:54.060 "enable_placement_id": 0, 00:05:54.060 
"enable_zerocopy_send_server": true, 00:05:54.060 "enable_zerocopy_send_client": false, 00:05:54.060 "zerocopy_threshold": 0, 00:05:54.060 "tls_version": 0, 00:05:54.060 "enable_ktls": false 00:05:54.060 } 00:05:54.060 }, 00:05:54.060 { 00:05:54.060 "method": "sock_impl_set_options", 00:05:54.060 "params": { 00:05:54.060 "impl_name": "posix", 00:05:54.060 "recv_buf_size": 2097152, 00:05:54.060 "send_buf_size": 2097152, 00:05:54.060 "enable_recv_pipe": true, 00:05:54.060 "enable_quickack": false, 00:05:54.060 "enable_placement_id": 0, 00:05:54.060 "enable_zerocopy_send_server": true, 00:05:54.060 "enable_zerocopy_send_client": false, 00:05:54.060 "zerocopy_threshold": 0, 00:05:54.060 "tls_version": 0, 00:05:54.060 "enable_ktls": false 00:05:54.060 } 00:05:54.060 } 00:05:54.060 ] 00:05:54.060 }, 00:05:54.060 { 00:05:54.060 "subsystem": "vmd", 00:05:54.060 "config": [] 00:05:54.060 }, 00:05:54.060 { 00:05:54.060 "subsystem": "accel", 00:05:54.060 "config": [ 00:05:54.060 { 00:05:54.060 "method": "accel_set_options", 00:05:54.060 "params": { 00:05:54.060 "small_cache_size": 128, 00:05:54.060 "large_cache_size": 16, 00:05:54.060 "task_count": 2048, 00:05:54.060 "sequence_count": 2048, 00:05:54.060 "buf_count": 2048 00:05:54.060 } 00:05:54.060 } 00:05:54.061 ] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "bdev", 00:05:54.061 "config": [ 00:05:54.061 { 00:05:54.061 "method": "bdev_set_options", 00:05:54.061 "params": { 00:05:54.061 "bdev_io_pool_size": 65535, 00:05:54.061 "bdev_io_cache_size": 256, 00:05:54.061 "bdev_auto_examine": true, 00:05:54.061 "iobuf_small_cache_size": 128, 00:05:54.061 "iobuf_large_cache_size": 16 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "bdev_raid_set_options", 00:05:54.061 "params": { 00:05:54.061 "process_window_size_kb": 1024, 00:05:54.061 "process_max_bandwidth_mb_sec": 0 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "bdev_iscsi_set_options", 00:05:54.061 "params": { 00:05:54.061 
"timeout_sec": 30 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "bdev_nvme_set_options", 00:05:54.061 "params": { 00:05:54.061 "action_on_timeout": "none", 00:05:54.061 "timeout_us": 0, 00:05:54.061 "timeout_admin_us": 0, 00:05:54.061 "keep_alive_timeout_ms": 10000, 00:05:54.061 "arbitration_burst": 0, 00:05:54.061 "low_priority_weight": 0, 00:05:54.061 "medium_priority_weight": 0, 00:05:54.061 "high_priority_weight": 0, 00:05:54.061 "nvme_adminq_poll_period_us": 10000, 00:05:54.061 "nvme_ioq_poll_period_us": 0, 00:05:54.061 "io_queue_requests": 0, 00:05:54.061 "delay_cmd_submit": true, 00:05:54.061 "transport_retry_count": 4, 00:05:54.061 "bdev_retry_count": 3, 00:05:54.061 "transport_ack_timeout": 0, 00:05:54.061 "ctrlr_loss_timeout_sec": 0, 00:05:54.061 "reconnect_delay_sec": 0, 00:05:54.061 "fast_io_fail_timeout_sec": 0, 00:05:54.061 "disable_auto_failback": false, 00:05:54.061 "generate_uuids": false, 00:05:54.061 "transport_tos": 0, 00:05:54.061 "nvme_error_stat": false, 00:05:54.061 "rdma_srq_size": 0, 00:05:54.061 "io_path_stat": false, 00:05:54.061 "allow_accel_sequence": false, 00:05:54.061 "rdma_max_cq_size": 0, 00:05:54.061 "rdma_cm_event_timeout_ms": 0, 00:05:54.061 "dhchap_digests": [ 00:05:54.061 "sha256", 00:05:54.061 "sha384", 00:05:54.061 "sha512" 00:05:54.061 ], 00:05:54.061 "dhchap_dhgroups": [ 00:05:54.061 "null", 00:05:54.061 "ffdhe2048", 00:05:54.061 "ffdhe3072", 00:05:54.061 "ffdhe4096", 00:05:54.061 "ffdhe6144", 00:05:54.061 "ffdhe8192" 00:05:54.061 ] 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "bdev_nvme_set_hotplug", 00:05:54.061 "params": { 00:05:54.061 "period_us": 100000, 00:05:54.061 "enable": false 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "bdev_wait_for_examine" 00:05:54.061 } 00:05:54.061 ] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "scsi", 00:05:54.061 "config": null 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "scheduler", 
00:05:54.061 "config": [ 00:05:54.061 { 00:05:54.061 "method": "framework_set_scheduler", 00:05:54.061 "params": { 00:05:54.061 "name": "static" 00:05:54.061 } 00:05:54.061 } 00:05:54.061 ] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "vhost_scsi", 00:05:54.061 "config": [] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "vhost_blk", 00:05:54.061 "config": [] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "ublk", 00:05:54.061 "config": [] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "nbd", 00:05:54.061 "config": [] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "nvmf", 00:05:54.061 "config": [ 00:05:54.061 { 00:05:54.061 "method": "nvmf_set_config", 00:05:54.061 "params": { 00:05:54.061 "discovery_filter": "match_any", 00:05:54.061 "admin_cmd_passthru": { 00:05:54.061 "identify_ctrlr": false 00:05:54.061 }, 00:05:54.061 "dhchap_digests": [ 00:05:54.061 "sha256", 00:05:54.061 "sha384", 00:05:54.061 "sha512" 00:05:54.061 ], 00:05:54.061 "dhchap_dhgroups": [ 00:05:54.061 "null", 00:05:54.061 "ffdhe2048", 00:05:54.061 "ffdhe3072", 00:05:54.061 "ffdhe4096", 00:05:54.061 "ffdhe6144", 00:05:54.061 "ffdhe8192" 00:05:54.061 ] 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "nvmf_set_max_subsystems", 00:05:54.061 "params": { 00:05:54.061 "max_subsystems": 1024 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "nvmf_set_crdt", 00:05:54.061 "params": { 00:05:54.061 "crdt1": 0, 00:05:54.061 "crdt2": 0, 00:05:54.061 "crdt3": 0 00:05:54.061 } 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "method": "nvmf_create_transport", 00:05:54.061 "params": { 00:05:54.061 "trtype": "TCP", 00:05:54.061 "max_queue_depth": 128, 00:05:54.061 "max_io_qpairs_per_ctrlr": 127, 00:05:54.061 "in_capsule_data_size": 4096, 00:05:54.061 "max_io_size": 131072, 00:05:54.061 "io_unit_size": 131072, 00:05:54.061 "max_aq_depth": 128, 00:05:54.061 "num_shared_buffers": 511, 00:05:54.061 "buf_cache_size": 4294967295, 
00:05:54.061 "dif_insert_or_strip": false, 00:05:54.061 "zcopy": false, 00:05:54.061 "c2h_success": true, 00:05:54.061 "sock_priority": 0, 00:05:54.061 "abort_timeout_sec": 1, 00:05:54.061 "ack_timeout": 0, 00:05:54.061 "data_wr_pool_size": 0 00:05:54.061 } 00:05:54.061 } 00:05:54.061 ] 00:05:54.061 }, 00:05:54.061 { 00:05:54.061 "subsystem": "iscsi", 00:05:54.061 "config": [ 00:05:54.061 { 00:05:54.061 "method": "iscsi_set_options", 00:05:54.061 "params": { 00:05:54.061 "node_base": "iqn.2016-06.io.spdk", 00:05:54.061 "max_sessions": 128, 00:05:54.061 "max_connections_per_session": 2, 00:05:54.061 "max_queue_depth": 64, 00:05:54.061 "default_time2wait": 2, 00:05:54.061 "default_time2retain": 20, 00:05:54.061 "first_burst_length": 8192, 00:05:54.061 "immediate_data": true, 00:05:54.061 "allow_duplicated_isid": false, 00:05:54.061 "error_recovery_level": 0, 00:05:54.061 "nop_timeout": 60, 00:05:54.061 "nop_in_interval": 30, 00:05:54.061 "disable_chap": false, 00:05:54.061 "require_chap": false, 00:05:54.061 "mutual_chap": false, 00:05:54.061 "chap_group": 0, 00:05:54.061 "max_large_datain_per_connection": 64, 00:05:54.061 "max_r2t_per_connection": 4, 00:05:54.061 "pdu_pool_size": 36864, 00:05:54.061 "immediate_data_pool_size": 16384, 00:05:54.061 "data_out_pool_size": 2048 00:05:54.061 } 00:05:54.061 } 00:05:54.061 ] 00:05:54.061 } 00:05:54.061 ] 00:05:54.061 } 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69503 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69503 ']' 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69503 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69503 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:54.061 killing process with pid 69503 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69503' 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69503 00:05:54.061 02:39:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69503 00:05:54.320 02:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69532 00:05:54.320 02:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.320 02:39:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:59.600 02:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69532 00:05:59.600 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69532 ']' 00:05:59.600 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69532 00:05:59.600 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:59.600 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.600 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69532 00:05:59.600 killing process with pid 69532 00:05:59.600 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.601 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:05:59.601 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69532' 00:05:59.601 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69532 00:05:59.601 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69532 00:05:59.860 02:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.860 02:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.860 ************************************ 00:05:59.860 END TEST skip_rpc_with_json 00:05:59.860 ************************************ 00:05:59.860 00:05:59.861 real 0m6.973s 00:05:59.861 user 0m6.487s 00:05:59.861 sys 0m0.762s 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.861 02:39:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.861 02:39:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.861 02:39:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.861 02:39:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.861 ************************************ 00:05:59.861 START TEST skip_rpc_with_delay 00:05:59.861 ************************************ 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.861 02:39:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.121 [2024-12-07 02:39:10.970694] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:00.121 [2024-12-07 02:39:10.970900] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:00.122 02:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:00.122 ************************************ 00:06:00.122 END TEST skip_rpc_with_delay 00:06:00.122 02:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:00.122 02:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:00.122 02:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:00.122 00:06:00.122 real 0m0.161s 00:06:00.122 user 0m0.087s 00:06:00.122 sys 0m0.071s 00:06:00.122 02:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:00.122 02:39:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.122 ************************************ 00:06:00.122 02:39:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.122 02:39:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.122 02:39:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.122 02:39:11 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.122 02:39:11 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.122 02:39:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.122 ************************************ 00:06:00.122 START TEST exit_on_failed_rpc_init 00:06:00.122 ************************************ 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69638 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69638 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69638 ']' 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.122 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.122 [2024-12-07 02:39:11.198166] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:00.122 [2024-12-07 02:39:11.198378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69638 ] 00:06:00.382 [2024-12-07 02:39:11.359262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.382 [2024-12-07 02:39:11.403473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.953 02:39:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.953 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.953 02:39:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.953 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.953 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:00.953 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.213 [2024-12-07 02:39:12.097003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:01.213 [2024-12-07 02:39:12.097223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69656 ] 00:06:01.213 [2024-12-07 02:39:12.257347] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.473 [2024-12-07 02:39:12.304482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.473 [2024-12-07 02:39:12.304568] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:01.473 [2024-12-07 02:39:12.304592] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.473 [2024-12-07 02:39:12.304604] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.473 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:01.473 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.473 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:01.473 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:01.473 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:01.473 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69638 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69638 ']' 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69638 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69638 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69638' 
00:06:01.474 killing process with pid 69638 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69638 00:06:01.474 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69638 00:06:02.045 00:06:02.045 real 0m1.750s 00:06:02.045 user 0m1.842s 00:06:02.045 sys 0m0.543s 00:06:02.045 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.045 02:39:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 ************************************ 00:06:02.045 END TEST exit_on_failed_rpc_init 00:06:02.045 ************************************ 00:06:02.045 02:39:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.045 00:06:02.045 real 0m14.839s 00:06:02.045 user 0m13.651s 00:06:02.045 sys 0m2.037s 00:06:02.045 02:39:12 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.045 02:39:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 ************************************ 00:06:02.045 END TEST skip_rpc 00:06:02.045 ************************************ 00:06:02.045 02:39:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.045 02:39:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.045 02:39:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.045 02:39:12 -- common/autotest_common.sh@10 -- # set +x 00:06:02.045 ************************************ 00:06:02.045 START TEST rpc_client 00:06:02.045 ************************************ 00:06:02.045 02:39:12 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.045 * Looking for test storage... 
00:06:02.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:02.045 02:39:13 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.045 02:39:13 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.045 02:39:13 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.305 02:39:13 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.305 02:39:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:02.305 02:39:13 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.305 02:39:13 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.305 --rc genhtml_branch_coverage=1 00:06:02.305 --rc genhtml_function_coverage=1 00:06:02.305 --rc genhtml_legend=1 00:06:02.305 --rc geninfo_all_blocks=1 00:06:02.305 --rc geninfo_unexecuted_blocks=1 00:06:02.305 00:06:02.305 ' 00:06:02.305 02:39:13 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.305 --rc genhtml_branch_coverage=1 00:06:02.305 --rc genhtml_function_coverage=1 00:06:02.305 --rc genhtml_legend=1 00:06:02.305 --rc geninfo_all_blocks=1 00:06:02.305 --rc geninfo_unexecuted_blocks=1 00:06:02.305 00:06:02.305 ' 00:06:02.305 02:39:13 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.305 --rc genhtml_branch_coverage=1 00:06:02.305 --rc genhtml_function_coverage=1 00:06:02.305 --rc genhtml_legend=1 00:06:02.305 --rc geninfo_all_blocks=1 00:06:02.305 --rc geninfo_unexecuted_blocks=1 00:06:02.305 00:06:02.305 ' 00:06:02.305 02:39:13 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.305 --rc genhtml_branch_coverage=1 00:06:02.305 --rc genhtml_function_coverage=1 00:06:02.305 --rc genhtml_legend=1 00:06:02.305 --rc geninfo_all_blocks=1 00:06:02.305 --rc geninfo_unexecuted_blocks=1 00:06:02.305 00:06:02.305 ' 00:06:02.305 02:39:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:02.305 OK 00:06:02.305 02:39:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.305 00:06:02.305 real 0m0.280s 00:06:02.305 user 0m0.149s 00:06:02.305 sys 0m0.146s 00:06:02.305 02:39:13 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.305 02:39:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.305 ************************************ 00:06:02.305 END TEST rpc_client 00:06:02.305 ************************************ 00:06:02.305 02:39:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.305 02:39:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.305 02:39:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.305 02:39:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.305 ************************************ 00:06:02.305 START TEST json_config 00:06:02.305 ************************************ 00:06:02.305 02:39:13 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.565 02:39:13 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.565 02:39:13 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.565 02:39:13 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.565 02:39:13 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.565 02:39:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.565 02:39:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.565 02:39:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.565 02:39:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.565 02:39:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.565 02:39:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.565 02:39:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.565 02:39:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.565 02:39:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.565 02:39:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.565 02:39:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.565 02:39:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:02.565 02:39:13 json_config -- scripts/common.sh@345 -- # : 1 00:06:02.565 02:39:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.565 02:39:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.565 02:39:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:02.565 02:39:13 json_config -- scripts/common.sh@353 -- # local d=1 00:06:02.565 02:39:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.565 02:39:13 json_config -- scripts/common.sh@355 -- # echo 1 00:06:02.565 02:39:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.566 02:39:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:02.566 02:39:13 json_config -- scripts/common.sh@353 -- # local d=2 00:06:02.566 02:39:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.566 02:39:13 json_config -- scripts/common.sh@355 -- # echo 2 00:06:02.566 02:39:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.566 02:39:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.566 02:39:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.566 02:39:13 json_config -- scripts/common.sh@368 -- # return 0 00:06:02.566 02:39:13 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.566 02:39:13 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.566 --rc genhtml_branch_coverage=1 00:06:02.566 --rc genhtml_function_coverage=1 00:06:02.566 --rc genhtml_legend=1 00:06:02.566 --rc geninfo_all_blocks=1 00:06:02.566 --rc geninfo_unexecuted_blocks=1 00:06:02.566 00:06:02.566 ' 00:06:02.566 02:39:13 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.566 --rc genhtml_branch_coverage=1 00:06:02.566 --rc genhtml_function_coverage=1 00:06:02.566 --rc genhtml_legend=1 00:06:02.566 --rc geninfo_all_blocks=1 00:06:02.566 --rc geninfo_unexecuted_blocks=1 00:06:02.566 00:06:02.566 ' 00:06:02.566 02:39:13 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.566 --rc genhtml_branch_coverage=1 00:06:02.566 --rc genhtml_function_coverage=1 00:06:02.566 --rc genhtml_legend=1 00:06:02.566 --rc geninfo_all_blocks=1 00:06:02.566 --rc geninfo_unexecuted_blocks=1 00:06:02.566 00:06:02.566 ' 00:06:02.566 02:39:13 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.566 --rc genhtml_branch_coverage=1 00:06:02.566 --rc genhtml_function_coverage=1 00:06:02.566 --rc genhtml_legend=1 00:06:02.566 --rc geninfo_all_blocks=1 00:06:02.566 --rc geninfo_unexecuted_blocks=1 00:06:02.566 00:06:02.566 ' 00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ee65646-a660-4775-adfc-b31218a3d881 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1ee65646-a660-4775-adfc-b31218a3d881 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.566 02:39:13 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.566 02:39:13 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.566 02:39:13 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.566 02:39:13 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.566 02:39:13 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.566 02:39:13 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.566 02:39:13 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.566 02:39:13 json_config -- paths/export.sh@5 -- # export PATH 00:06:02.566 02:39:13 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@51 -- # : 0 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.566 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.566 02:39:13 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:02.566 WARNING: No tests are enabled so not running JSON configuration tests 00:06:02.566 02:39:13 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:02.566 00:06:02.566 real 0m0.206s 00:06:02.566 user 0m0.131s 00:06:02.566 sys 0m0.080s 00:06:02.566 02:39:13 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.566 02:39:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.566 ************************************ 00:06:02.566 END TEST json_config 00:06:02.566 ************************************ 00:06:02.566 02:39:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.566 02:39:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.566 02:39:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.566 02:39:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.566 ************************************ 00:06:02.566 START TEST json_config_extra_key 00:06:02.566 ************************************ 00:06:02.566 02:39:13 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.827 02:39:13 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.827 02:39:13 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:06:02.827 02:39:13 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.827 02:39:13 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.827 02:39:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:02.827 02:39:13 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.827 02:39:13 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.827 --rc genhtml_branch_coverage=1 00:06:02.827 --rc genhtml_function_coverage=1 00:06:02.827 --rc genhtml_legend=1 00:06:02.827 --rc geninfo_all_blocks=1 00:06:02.827 --rc geninfo_unexecuted_blocks=1 00:06:02.827 00:06:02.827 ' 00:06:02.827 02:39:13 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.827 --rc genhtml_branch_coverage=1 00:06:02.827 --rc genhtml_function_coverage=1 00:06:02.827 --rc 
genhtml_legend=1 00:06:02.827 --rc geninfo_all_blocks=1 00:06:02.827 --rc geninfo_unexecuted_blocks=1 00:06:02.827 00:06:02.827 ' 00:06:02.827 02:39:13 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.827 --rc genhtml_branch_coverage=1 00:06:02.827 --rc genhtml_function_coverage=1 00:06:02.828 --rc genhtml_legend=1 00:06:02.828 --rc geninfo_all_blocks=1 00:06:02.828 --rc geninfo_unexecuted_blocks=1 00:06:02.828 00:06:02.828 ' 00:06:02.828 02:39:13 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.828 --rc genhtml_branch_coverage=1 00:06:02.828 --rc genhtml_function_coverage=1 00:06:02.828 --rc genhtml_legend=1 00:06:02.828 --rc geninfo_all_blocks=1 00:06:02.828 --rc geninfo_unexecuted_blocks=1 00:06:02.828 00:06:02.828 ' 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1ee65646-a660-4775-adfc-b31218a3d881 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1ee65646-a660-4775-adfc-b31218a3d881 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.828 02:39:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.828 02:39:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.828 02:39:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.828 02:39:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.828 02:39:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.828 02:39:13 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.828 02:39:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.828 02:39:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:02.828 02:39:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.828 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.828 02:39:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:02.828 INFO: launching applications... 
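The `[: : integer expression expected` message above is a genuine bash diagnostic: `nvmf/common.sh` line 33 executes `'[' '' -eq 1 ']'`, and `-eq` requires integer operands, which an empty string is not. A minimal sketch of that failure mode and two defensive rewrites (the `val` variable is illustrative, not from the script):

```shell
val=""

# This is the failing shape: `[ "" -eq 1 ]` prints "integer expression
# expected" on stderr and exits with status 2, so the else branch runs.
if [ "$val" -eq 1 ] 2>/dev/null; then
    echo "one"
else
    echo "not an integer or not one"
fi

# Defensive variants: default the empty value to 0, or gate on non-emptiness.
[ "${val:-0}" -eq 1 ] && echo "one" || echo "not one"
[[ -n $val && $val -eq 1 ]] && echo "one" || echo "empty or not one"
```

The test script tolerates the diagnostic because the non-zero exit status simply steers it into the else arm, which is why the run continues past the error.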
00:06:02.828 02:39:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69844 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.828 Waiting for target to run... 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69844 /var/tmp/spdk_tgt.sock 00:06:02.828 02:39:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.828 02:39:13 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69844 ']' 00:06:02.828 02:39:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
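`json_config/common.sh` above keeps per-app state in bash associative arrays keyed by app name (`target`): `app_pid`, `app_socket`, `app_params`, and `configs_path`. A condensed sketch of the pattern (the pid value is illustrative, not from the run):

```shell
# One associative array per attribute, all keyed by the app name (bash >= 4).
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

app=target
echo "launching $app with params: ${app_params[$app]}"
app_pid[$app]=12345        # recorded once the app has been forked
echo "$app listens on ${app_socket[$app]} as pid ${app_pid[$app]}"
```

Keying every attribute by the same name lets the helpers (`json_config_test_start_app`, `json_config_test_shutdown_app`) take a single `app=target` argument and look up everything else.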
00:06:02.828 02:39:13 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.828 02:39:13 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.828 02:39:13 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.828 02:39:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.088 [2024-12-07 02:39:13.911093] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:03.088 [2024-12-07 02:39:13.911220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69844 ] 00:06:03.348 [2024-12-07 02:39:14.296261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.348 [2024-12-07 02:39:14.325866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.919 00:06:03.919 INFO: shutting down applications... 00:06:03.920 02:39:14 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.920 02:39:14 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:03.920 02:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
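The `waitforlisten 69844 /var/tmp/spdk_tgt.sock` call above polls (up to `max_retries=100`) until the freshly launched `spdk_tgt` answers on its UNIX domain socket. The real helper probes readiness with an RPC; the sketch below is a simplification that only polls for the socket file to appear:

```shell
# Simplified stand-in for waitforlisten: retry until a socket file exists.
waitforsocket() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
        (( ++i ))
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# With a path nothing ever binds, the timeout branch is taken.
waitforsocket "/tmp/no-such-$$.sock" 3 2>/dev/null || echo "server never came up"
```

Bounding the retries matters in CI: a target that crashes on startup fails the test after a few seconds instead of hanging the pipeline.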
00:06:03.920 02:39:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69844 ]] 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69844 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69844 00:06:03.920 02:39:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.187 02:39:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.187 02:39:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.187 02:39:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69844 00:06:04.187 02:39:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:04.187 02:39:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:04.187 02:39:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:04.187 02:39:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:04.187 SPDK target shutdown done 00:06:04.187 02:39:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:04.187 Success 00:06:04.187 00:06:04.187 real 0m1.618s 00:06:04.187 user 0m1.319s 00:06:04.187 sys 0m0.464s 00:06:04.187 02:39:15 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.187 02:39:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:04.187 ************************************ 
00:06:04.187 END TEST json_config_extra_key 00:06:04.187 ************************************ 00:06:04.448 02:39:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.448 02:39:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:04.448 02:39:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.448 02:39:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.448 ************************************ 00:06:04.448 START TEST alias_rpc 00:06:04.448 ************************************ 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.448 * Looking for test storage... 00:06:04.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.448 02:39:15 alias_rpc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.448 02:39:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:04.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.448 --rc genhtml_branch_coverage=1 00:06:04.448 --rc genhtml_function_coverage=1 00:06:04.448 --rc genhtml_legend=1 00:06:04.448 --rc geninfo_all_blocks=1 00:06:04.448 --rc geninfo_unexecuted_blocks=1 00:06:04.448 00:06:04.448 ' 00:06:04.448 02:39:15 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:04.449 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.449 --rc genhtml_branch_coverage=1 00:06:04.449 --rc genhtml_function_coverage=1 00:06:04.449 --rc genhtml_legend=1 00:06:04.449 --rc geninfo_all_blocks=1 00:06:04.449 --rc geninfo_unexecuted_blocks=1 00:06:04.449 00:06:04.449 ' 00:06:04.449 02:39:15 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:04.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.449 --rc genhtml_branch_coverage=1 00:06:04.449 --rc genhtml_function_coverage=1 00:06:04.449 --rc genhtml_legend=1 00:06:04.449 --rc geninfo_all_blocks=1 00:06:04.449 --rc geninfo_unexecuted_blocks=1 00:06:04.449 00:06:04.449 ' 00:06:04.449 02:39:15 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:04.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.449 --rc genhtml_branch_coverage=1 00:06:04.449 --rc genhtml_function_coverage=1 00:06:04.449 --rc genhtml_legend=1 00:06:04.449 --rc geninfo_all_blocks=1 00:06:04.449 --rc geninfo_unexecuted_blocks=1 00:06:04.449 00:06:04.449 ' 00:06:04.449 02:39:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.449 02:39:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69923 00:06:04.449 02:39:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.449 02:39:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69923 00:06:04.449 02:39:15 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69923 ']' 00:06:04.708 02:39:15 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.708 02:39:15 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
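The `lt 1.15 2` trace above (from `scripts/common.sh`) compares versions component-wise: both strings are split on `.`, `-`, and `:` into arrays, each index is compared numerically, and a missing component counts as zero. A condensed sketch of that logic, assuming numeric components (the function name is illustrative, not the script's):

```shell
# Return 0 (true) when version $1 is strictly less than version $2.
version_lt() {
    local IFS=.-:
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components default to 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # all components equal: not strictly less
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

This also shows why `lt 1.15 2` succeeds in the trace: 1 < 2 at the first component decides the comparison before 15 is ever consulted, which a plain string comparison would get wrong.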
00:06:04.708 02:39:15 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.708 02:39:15 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.708 02:39:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.708 [2024-12-07 02:39:15.618345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:04.708 [2024-12-07 02:39:15.618472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69923 ] 00:06:04.708 [2024-12-07 02:39:15.779549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.968 [2024-12-07 02:39:15.824875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.538 02:39:16 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.538 02:39:16 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.538 02:39:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:05.800 02:39:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69923 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69923 ']' 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69923 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69923 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.800 killing process 
with pid 69923 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69923' 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@969 -- # kill 69923 00:06:05.800 02:39:16 alias_rpc -- common/autotest_common.sh@974 -- # wait 69923 00:06:06.064 ************************************ 00:06:06.064 END TEST alias_rpc 00:06:06.064 ************************************ 00:06:06.064 00:06:06.064 real 0m1.757s 00:06:06.064 user 0m1.731s 00:06:06.064 sys 0m0.519s 00:06:06.064 02:39:17 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.064 02:39:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.064 02:39:17 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:06.064 02:39:17 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.064 02:39:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.064 02:39:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.064 02:39:17 -- common/autotest_common.sh@10 -- # set +x 00:06:06.064 ************************************ 00:06:06.064 START TEST spdkcli_tcp 00:06:06.064 ************************************ 00:06:06.064 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:06.324 * Looking for test storage... 
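The `killprocess 69923` sequence above first confirms the pid is alive with `kill -0`, checks via `ps --no-headers -o comm=` that it still names the expected reactor process, then signals it and `wait`s for exit. A stripped-down sketch of the same shutdown pattern, with the comm-name check omitted:

```shell
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || { echo "process $pid not running"; return 1; }
    kill "$pid"                     # default signal is SIGTERM
    # Reap the child so the pid is fully gone (a non-child would need a
    # kill -0 polling loop instead, as in the log's shutdown helper).
    wait "$pid" 2>/dev/null || true
    echo "process $pid terminated"
}

sleep 30 &
killprocess $!
```

The `kill -0` probe delivers no signal at all; it only asks the kernel whether the pid exists and is signalable, which is why both the liveness check and the wait loop in the trace use it.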
00:06:06.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.324 02:39:17 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.324 --rc genhtml_branch_coverage=1 00:06:06.324 --rc genhtml_function_coverage=1 00:06:06.324 --rc genhtml_legend=1 00:06:06.324 --rc geninfo_all_blocks=1 00:06:06.324 --rc geninfo_unexecuted_blocks=1 00:06:06.324 00:06:06.324 ' 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.324 --rc genhtml_branch_coverage=1 00:06:06.324 --rc genhtml_function_coverage=1 00:06:06.324 --rc genhtml_legend=1 00:06:06.324 --rc geninfo_all_blocks=1 00:06:06.324 --rc geninfo_unexecuted_blocks=1 00:06:06.324 00:06:06.324 ' 00:06:06.324 02:39:17 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.324 --rc genhtml_branch_coverage=1 00:06:06.324 --rc genhtml_function_coverage=1 00:06:06.324 --rc genhtml_legend=1 00:06:06.324 --rc geninfo_all_blocks=1 00:06:06.324 --rc geninfo_unexecuted_blocks=1 00:06:06.324 00:06:06.324 ' 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.324 --rc genhtml_branch_coverage=1 00:06:06.324 --rc genhtml_function_coverage=1 00:06:06.324 --rc genhtml_legend=1 00:06:06.324 --rc geninfo_all_blocks=1 00:06:06.324 --rc geninfo_unexecuted_blocks=1 00:06:06.324 00:06:06.324 ' 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69997 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:06.324 02:39:17 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69997 00:06:06.324 02:39:17 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69997 ']' 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.324 02:39:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.584 [2024-12-07 02:39:17.448314] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:06.584 [2024-12-07 02:39:17.448496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69997 ] 00:06:06.584 [2024-12-07 02:39:17.609744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.584 [2024-12-07 02:39:17.653693] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.584 [2024-12-07 02:39:17.653810] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.524 02:39:18 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.524 02:39:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:07.524 02:39:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70014 00:06:07.524 02:39:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:07.524 02:39:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:07.524 [ 00:06:07.524 "bdev_malloc_delete", 
00:06:07.524 "bdev_malloc_create", 00:06:07.524 "bdev_null_resize", 00:06:07.524 "bdev_null_delete", 00:06:07.524 "bdev_null_create", 00:06:07.524 "bdev_nvme_cuse_unregister", 00:06:07.524 "bdev_nvme_cuse_register", 00:06:07.524 "bdev_opal_new_user", 00:06:07.524 "bdev_opal_set_lock_state", 00:06:07.524 "bdev_opal_delete", 00:06:07.524 "bdev_opal_get_info", 00:06:07.524 "bdev_opal_create", 00:06:07.524 "bdev_nvme_opal_revert", 00:06:07.524 "bdev_nvme_opal_init", 00:06:07.524 "bdev_nvme_send_cmd", 00:06:07.524 "bdev_nvme_set_keys", 00:06:07.524 "bdev_nvme_get_path_iostat", 00:06:07.524 "bdev_nvme_get_mdns_discovery_info", 00:06:07.524 "bdev_nvme_stop_mdns_discovery", 00:06:07.524 "bdev_nvme_start_mdns_discovery", 00:06:07.524 "bdev_nvme_set_multipath_policy", 00:06:07.524 "bdev_nvme_set_preferred_path", 00:06:07.524 "bdev_nvme_get_io_paths", 00:06:07.524 "bdev_nvme_remove_error_injection", 00:06:07.524 "bdev_nvme_add_error_injection", 00:06:07.524 "bdev_nvme_get_discovery_info", 00:06:07.524 "bdev_nvme_stop_discovery", 00:06:07.524 "bdev_nvme_start_discovery", 00:06:07.524 "bdev_nvme_get_controller_health_info", 00:06:07.524 "bdev_nvme_disable_controller", 00:06:07.524 "bdev_nvme_enable_controller", 00:06:07.524 "bdev_nvme_reset_controller", 00:06:07.524 "bdev_nvme_get_transport_statistics", 00:06:07.524 "bdev_nvme_apply_firmware", 00:06:07.524 "bdev_nvme_detach_controller", 00:06:07.524 "bdev_nvme_get_controllers", 00:06:07.524 "bdev_nvme_attach_controller", 00:06:07.524 "bdev_nvme_set_hotplug", 00:06:07.524 "bdev_nvme_set_options", 00:06:07.524 "bdev_passthru_delete", 00:06:07.525 "bdev_passthru_create", 00:06:07.525 "bdev_lvol_set_parent_bdev", 00:06:07.525 "bdev_lvol_set_parent", 00:06:07.525 "bdev_lvol_check_shallow_copy", 00:06:07.525 "bdev_lvol_start_shallow_copy", 00:06:07.525 "bdev_lvol_grow_lvstore", 00:06:07.525 "bdev_lvol_get_lvols", 00:06:07.525 "bdev_lvol_get_lvstores", 00:06:07.525 "bdev_lvol_delete", 00:06:07.525 "bdev_lvol_set_read_only", 
00:06:07.525 "bdev_lvol_resize", 00:06:07.525 "bdev_lvol_decouple_parent", 00:06:07.525 "bdev_lvol_inflate", 00:06:07.525 "bdev_lvol_rename", 00:06:07.525 "bdev_lvol_clone_bdev", 00:06:07.525 "bdev_lvol_clone", 00:06:07.525 "bdev_lvol_snapshot", 00:06:07.525 "bdev_lvol_create", 00:06:07.525 "bdev_lvol_delete_lvstore", 00:06:07.525 "bdev_lvol_rename_lvstore", 00:06:07.525 "bdev_lvol_create_lvstore", 00:06:07.525 "bdev_raid_set_options", 00:06:07.525 "bdev_raid_remove_base_bdev", 00:06:07.525 "bdev_raid_add_base_bdev", 00:06:07.525 "bdev_raid_delete", 00:06:07.525 "bdev_raid_create", 00:06:07.525 "bdev_raid_get_bdevs", 00:06:07.525 "bdev_error_inject_error", 00:06:07.525 "bdev_error_delete", 00:06:07.525 "bdev_error_create", 00:06:07.525 "bdev_split_delete", 00:06:07.525 "bdev_split_create", 00:06:07.525 "bdev_delay_delete", 00:06:07.525 "bdev_delay_create", 00:06:07.525 "bdev_delay_update_latency", 00:06:07.525 "bdev_zone_block_delete", 00:06:07.525 "bdev_zone_block_create", 00:06:07.525 "blobfs_create", 00:06:07.525 "blobfs_detect", 00:06:07.525 "blobfs_set_cache_size", 00:06:07.525 "bdev_aio_delete", 00:06:07.525 "bdev_aio_rescan", 00:06:07.525 "bdev_aio_create", 00:06:07.525 "bdev_ftl_set_property", 00:06:07.525 "bdev_ftl_get_properties", 00:06:07.525 "bdev_ftl_get_stats", 00:06:07.525 "bdev_ftl_unmap", 00:06:07.525 "bdev_ftl_unload", 00:06:07.525 "bdev_ftl_delete", 00:06:07.525 "bdev_ftl_load", 00:06:07.525 "bdev_ftl_create", 00:06:07.525 "bdev_virtio_attach_controller", 00:06:07.525 "bdev_virtio_scsi_get_devices", 00:06:07.525 "bdev_virtio_detach_controller", 00:06:07.525 "bdev_virtio_blk_set_hotplug", 00:06:07.525 "bdev_iscsi_delete", 00:06:07.525 "bdev_iscsi_create", 00:06:07.525 "bdev_iscsi_set_options", 00:06:07.525 "accel_error_inject_error", 00:06:07.525 "ioat_scan_accel_module", 00:06:07.525 "dsa_scan_accel_module", 00:06:07.525 "iaa_scan_accel_module", 00:06:07.525 "keyring_file_remove_key", 00:06:07.525 "keyring_file_add_key", 00:06:07.525 
"keyring_linux_set_options", 00:06:07.525 "fsdev_aio_delete", 00:06:07.525 "fsdev_aio_create", 00:06:07.525 "iscsi_get_histogram", 00:06:07.525 "iscsi_enable_histogram", 00:06:07.525 "iscsi_set_options", 00:06:07.525 "iscsi_get_auth_groups", 00:06:07.525 "iscsi_auth_group_remove_secret", 00:06:07.525 "iscsi_auth_group_add_secret", 00:06:07.525 "iscsi_delete_auth_group", 00:06:07.525 "iscsi_create_auth_group", 00:06:07.525 "iscsi_set_discovery_auth", 00:06:07.525 "iscsi_get_options", 00:06:07.525 "iscsi_target_node_request_logout", 00:06:07.525 "iscsi_target_node_set_redirect", 00:06:07.525 "iscsi_target_node_set_auth", 00:06:07.525 "iscsi_target_node_add_lun", 00:06:07.525 "iscsi_get_stats", 00:06:07.525 "iscsi_get_connections", 00:06:07.525 "iscsi_portal_group_set_auth", 00:06:07.525 "iscsi_start_portal_group", 00:06:07.525 "iscsi_delete_portal_group", 00:06:07.525 "iscsi_create_portal_group", 00:06:07.525 "iscsi_get_portal_groups", 00:06:07.525 "iscsi_delete_target_node", 00:06:07.525 "iscsi_target_node_remove_pg_ig_maps", 00:06:07.525 "iscsi_target_node_add_pg_ig_maps", 00:06:07.525 "iscsi_create_target_node", 00:06:07.525 "iscsi_get_target_nodes", 00:06:07.525 "iscsi_delete_initiator_group", 00:06:07.525 "iscsi_initiator_group_remove_initiators", 00:06:07.525 "iscsi_initiator_group_add_initiators", 00:06:07.525 "iscsi_create_initiator_group", 00:06:07.525 "iscsi_get_initiator_groups", 00:06:07.525 "nvmf_set_crdt", 00:06:07.525 "nvmf_set_config", 00:06:07.525 "nvmf_set_max_subsystems", 00:06:07.525 "nvmf_stop_mdns_prr", 00:06:07.525 "nvmf_publish_mdns_prr", 00:06:07.525 "nvmf_subsystem_get_listeners", 00:06:07.525 "nvmf_subsystem_get_qpairs", 00:06:07.525 "nvmf_subsystem_get_controllers", 00:06:07.525 "nvmf_get_stats", 00:06:07.525 "nvmf_get_transports", 00:06:07.525 "nvmf_create_transport", 00:06:07.525 "nvmf_get_targets", 00:06:07.525 "nvmf_delete_target", 00:06:07.525 "nvmf_create_target", 00:06:07.525 "nvmf_subsystem_allow_any_host", 00:06:07.525 
"nvmf_subsystem_set_keys", 00:06:07.525 "nvmf_subsystem_remove_host", 00:06:07.525 "nvmf_subsystem_add_host", 00:06:07.525 "nvmf_ns_remove_host", 00:06:07.525 "nvmf_ns_add_host", 00:06:07.525 "nvmf_subsystem_remove_ns", 00:06:07.525 "nvmf_subsystem_set_ns_ana_group", 00:06:07.525 "nvmf_subsystem_add_ns", 00:06:07.525 "nvmf_subsystem_listener_set_ana_state", 00:06:07.525 "nvmf_discovery_get_referrals", 00:06:07.525 "nvmf_discovery_remove_referral", 00:06:07.525 "nvmf_discovery_add_referral", 00:06:07.525 "nvmf_subsystem_remove_listener", 00:06:07.525 "nvmf_subsystem_add_listener", 00:06:07.525 "nvmf_delete_subsystem", 00:06:07.525 "nvmf_create_subsystem", 00:06:07.525 "nvmf_get_subsystems", 00:06:07.525 "env_dpdk_get_mem_stats", 00:06:07.525 "nbd_get_disks", 00:06:07.525 "nbd_stop_disk", 00:06:07.525 "nbd_start_disk", 00:06:07.525 "ublk_recover_disk", 00:06:07.525 "ublk_get_disks", 00:06:07.525 "ublk_stop_disk", 00:06:07.525 "ublk_start_disk", 00:06:07.525 "ublk_destroy_target", 00:06:07.525 "ublk_create_target", 00:06:07.525 "virtio_blk_create_transport", 00:06:07.525 "virtio_blk_get_transports", 00:06:07.525 "vhost_controller_set_coalescing", 00:06:07.525 "vhost_get_controllers", 00:06:07.525 "vhost_delete_controller", 00:06:07.525 "vhost_create_blk_controller", 00:06:07.525 "vhost_scsi_controller_remove_target", 00:06:07.525 "vhost_scsi_controller_add_target", 00:06:07.525 "vhost_start_scsi_controller", 00:06:07.525 "vhost_create_scsi_controller", 00:06:07.525 "thread_set_cpumask", 00:06:07.525 "scheduler_set_options", 00:06:07.525 "framework_get_governor", 00:06:07.525 "framework_get_scheduler", 00:06:07.525 "framework_set_scheduler", 00:06:07.525 "framework_get_reactors", 00:06:07.525 "thread_get_io_channels", 00:06:07.525 "thread_get_pollers", 00:06:07.525 "thread_get_stats", 00:06:07.525 "framework_monitor_context_switch", 00:06:07.525 "spdk_kill_instance", 00:06:07.525 "log_enable_timestamps", 00:06:07.525 "log_get_flags", 00:06:07.525 "log_clear_flag", 
00:06:07.525 "log_set_flag", 00:06:07.525 "log_get_level", 00:06:07.525 "log_set_level", 00:06:07.525 "log_get_print_level", 00:06:07.525 "log_set_print_level", 00:06:07.525 "framework_enable_cpumask_locks", 00:06:07.525 "framework_disable_cpumask_locks", 00:06:07.525 "framework_wait_init", 00:06:07.525 "framework_start_init", 00:06:07.525 "scsi_get_devices", 00:06:07.525 "bdev_get_histogram", 00:06:07.525 "bdev_enable_histogram", 00:06:07.525 "bdev_set_qos_limit", 00:06:07.525 "bdev_set_qd_sampling_period", 00:06:07.525 "bdev_get_bdevs", 00:06:07.525 "bdev_reset_iostat", 00:06:07.525 "bdev_get_iostat", 00:06:07.525 "bdev_examine", 00:06:07.525 "bdev_wait_for_examine", 00:06:07.525 "bdev_set_options", 00:06:07.525 "accel_get_stats", 00:06:07.525 "accel_set_options", 00:06:07.525 "accel_set_driver", 00:06:07.525 "accel_crypto_key_destroy", 00:06:07.525 "accel_crypto_keys_get", 00:06:07.525 "accel_crypto_key_create", 00:06:07.525 "accel_assign_opc", 00:06:07.525 "accel_get_module_info", 00:06:07.525 "accel_get_opc_assignments", 00:06:07.525 "vmd_rescan", 00:06:07.525 "vmd_remove_device", 00:06:07.525 "vmd_enable", 00:06:07.525 "sock_get_default_impl", 00:06:07.525 "sock_set_default_impl", 00:06:07.525 "sock_impl_set_options", 00:06:07.525 "sock_impl_get_options", 00:06:07.525 "iobuf_get_stats", 00:06:07.525 "iobuf_set_options", 00:06:07.525 "keyring_get_keys", 00:06:07.525 "framework_get_pci_devices", 00:06:07.525 "framework_get_config", 00:06:07.525 "framework_get_subsystems", 00:06:07.525 "fsdev_set_opts", 00:06:07.525 "fsdev_get_opts", 00:06:07.525 "trace_get_info", 00:06:07.525 "trace_get_tpoint_group_mask", 00:06:07.525 "trace_disable_tpoint_group", 00:06:07.525 "trace_enable_tpoint_group", 00:06:07.525 "trace_clear_tpoint_mask", 00:06:07.525 "trace_set_tpoint_mask", 00:06:07.525 "notify_get_notifications", 00:06:07.525 "notify_get_types", 00:06:07.525 "spdk_get_version", 00:06:07.525 "rpc_get_methods" 00:06:07.525 ] 00:06:07.525 02:39:18 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.525 02:39:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:07.525 02:39:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69997 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69997 ']' 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69997 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69997 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.525 02:39:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.526 02:39:18 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69997' 00:06:07.526 killing process with pid 69997 00:06:07.526 02:39:18 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69997 00:06:07.526 02:39:18 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69997 00:06:08.096 00:06:08.096 real 0m1.853s 00:06:08.096 user 0m3.098s 00:06:08.096 sys 0m0.561s 00:06:08.096 02:39:18 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.096 02:39:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.096 ************************************ 00:06:08.096 END TEST spdkcli_tcp 00:06:08.096 ************************************ 00:06:08.096 02:39:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.096 02:39:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.096 02:39:19 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.096 02:39:19 -- common/autotest_common.sh@10 -- # set +x 00:06:08.096 ************************************ 00:06:08.096 START TEST dpdk_mem_utility 00:06:08.096 ************************************ 00:06:08.096 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:08.096 * Looking for test storage... 00:06:08.096 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:08.096 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:08.356 
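The xtrace output above and below walks through the `cmp_versions` helper in scripts/common.sh: it splits each version string on `.`, `-`, and `:` (via `IFS=.-:`), then compares the components numerically one by one, here deciding whether lcov 1.15 is older than 2. A minimal Python sketch of that comparison, assuming the same splitting rules; the function name `version_lt` is illustrative and not part of SPDK:

```python
import re

def version_lt(ver1: str, ver2: str) -> bool:
    """Return True if ver1 < ver2, mirroring the component-wise
    comparison traced from scripts/common.sh (split on '.', '-', ':')."""
    # Split each version into components, as IFS=.-: does in the shell;
    # non-numeric components are treated as 0 here (an assumption).
    parts1 = [int(p) if p.isdigit() else 0 for p in re.split(r"[.\-:]", ver1)]
    parts2 = [int(p) if p.isdigit() else 0 for p in re.split(r"[.\-:]", ver2)]
    # Walk the longer of the two component lists, like the shell loop
    # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )); missing components
    # default to 0.
    for i in range(max(len(parts1), len(parts2))):
        v1 = parts1[i] if i < len(parts1) else 0
        v2 = parts2[i] if i < len(parts2) else 0
        if v1 > v2:
            return False
        if v1 < v2:
            return True
    return False  # equal versions are not "less than"

print(version_lt("1.15", "2"))  # → True, matching the lcov check in the log
```

With "1.15" vs "2", the first components already decide the result (1 < 2), which is why the traced shell loop returns 0 ("less than") after a single iteration.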
02:39:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.356 02:39:19 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:08.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.356 --rc genhtml_branch_coverage=1 00:06:08.356 --rc genhtml_function_coverage=1 00:06:08.356 --rc genhtml_legend=1 00:06:08.356 --rc geninfo_all_blocks=1 00:06:08.356 --rc geninfo_unexecuted_blocks=1 00:06:08.356 00:06:08.356 ' 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:08.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.356 --rc 
genhtml_branch_coverage=1 00:06:08.356 --rc genhtml_function_coverage=1 00:06:08.356 --rc genhtml_legend=1 00:06:08.356 --rc geninfo_all_blocks=1 00:06:08.356 --rc geninfo_unexecuted_blocks=1 00:06:08.356 00:06:08.356 ' 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:08.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.356 --rc genhtml_branch_coverage=1 00:06:08.356 --rc genhtml_function_coverage=1 00:06:08.356 --rc genhtml_legend=1 00:06:08.356 --rc geninfo_all_blocks=1 00:06:08.356 --rc geninfo_unexecuted_blocks=1 00:06:08.356 00:06:08.356 ' 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:08.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.356 --rc genhtml_branch_coverage=1 00:06:08.356 --rc genhtml_function_coverage=1 00:06:08.356 --rc genhtml_legend=1 00:06:08.356 --rc geninfo_all_blocks=1 00:06:08.356 --rc geninfo_unexecuted_blocks=1 00:06:08.356 00:06:08.356 ' 00:06:08.356 02:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.356 02:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70097 00:06:08.356 02:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.356 02:39:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70097 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70097 ']' 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:08.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.356 02:39:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.356 [2024-12-07 02:39:19.360365] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:08.356 [2024-12-07 02:39:19.360581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70097 ] 00:06:08.616 [2024-12-07 02:39:19.519164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.616 [2024-12-07 02:39:19.565913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.186 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.186 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:09.186 02:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.186 02:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.186 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.186 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:09.186 { 00:06:09.186 "filename": "/tmp/spdk_mem_dump.txt" 00:06:09.186 } 00:06:09.186 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.186 02:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:09.186 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:09.186 1 heaps totaling size 860.000000 MiB 00:06:09.187 size: 
860.000000 MiB heap id: 0 00:06:09.187 end heaps---------- 00:06:09.187 9 mempools totaling size 642.649841 MiB 00:06:09.187 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:09.187 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:09.187 size: 92.545471 MiB name: bdev_io_70097 00:06:09.187 size: 51.011292 MiB name: evtpool_70097 00:06:09.187 size: 50.003479 MiB name: msgpool_70097 00:06:09.187 size: 36.509338 MiB name: fsdev_io_70097 00:06:09.187 size: 21.763794 MiB name: PDU_Pool 00:06:09.187 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:09.187 size: 0.026123 MiB name: Session_Pool 00:06:09.187 end mempools------- 00:06:09.187 6 memzones totaling size 4.142822 MiB 00:06:09.187 size: 1.000366 MiB name: RG_ring_0_70097 00:06:09.187 size: 1.000366 MiB name: RG_ring_1_70097 00:06:09.187 size: 1.000366 MiB name: RG_ring_4_70097 00:06:09.187 size: 1.000366 MiB name: RG_ring_5_70097 00:06:09.187 size: 0.125366 MiB name: RG_ring_2_70097 00:06:09.187 size: 0.015991 MiB name: RG_ring_3_70097 00:06:09.187 end memzones------- 00:06:09.187 02:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:09.449 heap id: 0 total size: 860.000000 MiB number of busy elements: 316 number of free elements: 16 00:06:09.449 list of free elements. 
size: 13.934875 MiB 00:06:09.449 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:09.449 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:09.449 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:09.449 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:09.449 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:09.449 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:09.449 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:09.449 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:09.449 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:09.449 element at address: 0x20001d800000 with size: 0.567139 MiB 00:06:09.449 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:09.449 element at address: 0x200003e00000 with size: 0.487183 MiB 00:06:09.449 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:09.449 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:09.449 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:09.449 element at address: 0x200003a00000 with size: 0.353210 MiB 00:06:09.449 list of standard malloc elements. 
size: 199.268433 MiB 00:06:09.449 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:09.449 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:09.449 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:09.449 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:09.449 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:09.449 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:09.449 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:09.449 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:09.449 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:09.449 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:09.449 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:09.450 element at 
address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a5eb80 with size: 0.000183 MiB 
00:06:09.450 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d540 with 
size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:09.450 element at address: 
0x200003e7ea40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:09.450 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:09.450 
element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892200 with size: 0.000183 
MiB 00:06:09.450 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:09.450 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:09.451 element at address: 0x20001d893700 
with size: 0.000183 MiB
00:06:09.451 element at address: 0x20001d8937c0 with size: 0.000183 MiB
00:06:09.451 [identical 0.000183 MiB elements at 0x20001d893880 through 0x20001d895440 and 0x20002ac65500 through 0x20002ac6ff00 elided]
00:06:09.451 list of memzone associated elements. size: 646.796692 MiB
00:06:09.451 element at address: 0x20001d895500 with size: 211.416748 MiB
00:06:09.451 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:09.451 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB
00:06:09.451 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:09.451 element at address: 0x200015ff4780 with size: 92.045044 MiB
00:06:09.451 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70097_0
00:06:09.451 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:09.451 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70097_0
00:06:09.451 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:09.452 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70097_0
00:06:09.452 element at address: 0x2000071fdb80 with size: 36.008911 MiB
00:06:09.452 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70097_0
00:06:09.452 element at address: 0x20001c3be940 with size: 20.255554 MiB
00:06:09.452 associated memzone info: size: 20.255432 MiB name:
MP_PDU_Pool_0
00:06:09.452 element at address: 0x200034bfeb40 with size: 18.005066 MiB
00:06:09.452 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:09.452 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:09.452 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70097
00:06:09.452 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:09.452 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70097
00:06:09.452 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:09.452 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70097
00:06:09.452 element at address: 0x20000d8fde40 with size: 1.008118 MiB
00:06:09.452 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:09.452 element at address: 0x20001c2bc800 with size: 1.008118 MiB
00:06:09.452 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:09.452 element at address: 0x2000096fde40 with size: 1.008118 MiB
00:06:09.452 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:09.452 element at address: 0x2000070fba40 with size: 1.008118 MiB
00:06:09.452 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:09.452 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:09.452 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70097
00:06:09.452 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:09.452 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70097
00:06:09.452 element at address: 0x200015ef4580 with size: 1.000488 MiB
00:06:09.452 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70097
00:06:09.452 element at address: 0x200034afe940 with size: 1.000488 MiB
00:06:09.452 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70097
00:06:09.452 element at address: 0x200003a7f680 with size: 0.500488 MiB
00:06:09.452 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70097
00:06:09.452 element at address: 0x200003e7eec0 with size: 0.500488 MiB
00:06:09.452 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70097
00:06:09.452 element at address: 0x20000d87db80 with size: 0.500488 MiB
00:06:09.452 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:09.452 element at address: 0x20000707b780 with size: 0.500488 MiB
00:06:09.452 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:09.452 element at address: 0x20001c27c540 with size: 0.250488 MiB
00:06:09.452 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:09.452 element at address: 0x200003a5ec40 with size: 0.125488 MiB
00:06:09.452 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70097
00:06:09.452 element at address: 0x2000096f5b80 with size: 0.031738 MiB
00:06:09.452 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:09.452 element at address: 0x20002ac65680 with size: 0.023743 MiB
00:06:09.452 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:09.452 element at address: 0x200003a5a980 with size: 0.016113 MiB
00:06:09.452 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70097
00:06:09.452 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB
00:06:09.452 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:09.452 element at address: 0x2000002d6780 with size: 0.000305 MiB
00:06:09.452 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70097
00:06:09.452 element at address: 0x200003aff940 with size: 0.000305 MiB
00:06:09.452 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70097
00:06:09.452 element at address: 0x200003a5a780 with size: 0.000305 MiB
00:06:09.452 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70097
00:06:09.452 element at address: 0x20002ac6c280 with size: 0.000305 MiB
00:06:09.452 associated memzone info: size: 0.000183
MiB name: MP_Session_Pool
00:06:09.452 02:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:09.452 02:39:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70097
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70097 ']'
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70097
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70097
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 70097
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70097'
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70097
00:06:09.452 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70097
00:06:09.712
00:06:09.712 real 0m1.702s
00:06:09.712 user 0m1.625s
00:06:09.712 sys 0m0.537s
00:06:09.712 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:09.712 02:39:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:09.712 ************************************
00:06:09.712 END TEST dpdk_mem_utility
00:06:09.712 ************************************
00:06:09.973 02:39:20 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:09.973 02:39:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:09.973 02:39:20 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:09.973 02:39:20 -- common/autotest_common.sh@10 --
# set +x 00:06:09.973 ************************************ 00:06:09.973 START TEST event 00:06:09.973 ************************************ 00:06:09.973 02:39:20 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.973 * Looking for test storage... 00:06:09.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.973 02:39:20 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.973 02:39:20 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.973 02:39:20 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.973 02:39:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.973 02:39:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.973 02:39:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.973 02:39:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.973 02:39:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.973 02:39:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.973 02:39:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.973 02:39:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.973 02:39:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.973 02:39:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.973 02:39:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.973 02:39:21 event -- scripts/common.sh@344 -- # case "$op" in 00:06:09.973 02:39:21 event -- scripts/common.sh@345 -- # : 1 00:06:09.973 02:39:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.973 02:39:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.973 02:39:21 event -- scripts/common.sh@365 -- # decimal 1 00:06:09.973 02:39:21 event -- scripts/common.sh@353 -- # local d=1 00:06:09.973 02:39:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.973 02:39:21 event -- scripts/common.sh@355 -- # echo 1 00:06:09.973 02:39:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.973 02:39:21 event -- scripts/common.sh@366 -- # decimal 2 00:06:09.973 02:39:21 event -- scripts/common.sh@353 -- # local d=2 00:06:09.973 02:39:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.973 02:39:21 event -- scripts/common.sh@355 -- # echo 2 00:06:09.973 02:39:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.973 02:39:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.973 02:39:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.973 02:39:21 event -- scripts/common.sh@368 -- # return 0 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.973 --rc genhtml_branch_coverage=1 00:06:09.973 --rc genhtml_function_coverage=1 00:06:09.973 --rc genhtml_legend=1 00:06:09.973 --rc geninfo_all_blocks=1 00:06:09.973 --rc geninfo_unexecuted_blocks=1 00:06:09.973 00:06:09.973 ' 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.973 --rc genhtml_branch_coverage=1 00:06:09.973 --rc genhtml_function_coverage=1 00:06:09.973 --rc genhtml_legend=1 00:06:09.973 --rc geninfo_all_blocks=1 00:06:09.973 --rc geninfo_unexecuted_blocks=1 00:06:09.973 00:06:09.973 ' 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.973 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:09.973 --rc genhtml_branch_coverage=1 00:06:09.973 --rc genhtml_function_coverage=1 00:06:09.973 --rc genhtml_legend=1 00:06:09.973 --rc geninfo_all_blocks=1 00:06:09.973 --rc geninfo_unexecuted_blocks=1 00:06:09.973 00:06:09.973 ' 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.973 --rc genhtml_branch_coverage=1 00:06:09.973 --rc genhtml_function_coverage=1 00:06:09.973 --rc genhtml_legend=1 00:06:09.973 --rc geninfo_all_blocks=1 00:06:09.973 --rc geninfo_unexecuted_blocks=1 00:06:09.973 00:06:09.973 ' 00:06:09.973 02:39:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:09.973 02:39:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.973 02:39:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:09.973 02:39:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.973 02:39:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.234 ************************************ 00:06:10.234 START TEST event_perf 00:06:10.234 ************************************ 00:06:10.234 02:39:21 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.234 Running I/O for 1 seconds...[2024-12-07 02:39:21.098327] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
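The xtrace above steps through the `lt`/`cmp_versions` helpers from scripts/common.sh: both version strings are split into arrays on `IFS=.-:` and compared component by component. A condensed standalone sketch of that idea follows; the helper name `version_lt` and the zero-padding of missing components are assumptions here, not the SPDK source:

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component by component, in the
# style of the cmp_versions trace above (e.g. "lt 1.15 2").
version_lt() {
    local IFS=.-:               # split on dots, dashes, and colons
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This mirrors why the log takes the `lt 1.15 2` branch: the first components 1 and 2 already decide the comparison.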
00:06:10.234 [2024-12-07 02:39:21.098491] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70183 ] 00:06:10.234 [2024-12-07 02:39:21.261072] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.496 [2024-12-07 02:39:21.316366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.496 [2024-12-07 02:39:21.316559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.496 [2024-12-07 02:39:21.316612] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.496 Running I/O for 1 seconds...[2024-12-07 02:39:21.316798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.434 00:06:11.434 lcore 0: 127807 00:06:11.434 lcore 1: 127810 00:06:11.434 lcore 2: 127806 00:06:11.434 lcore 3: 127806 00:06:11.434 done. 
00:06:11.434 00:06:11.434 real 0m1.402s 00:06:11.434 ************************************ 00:06:11.434 END TEST event_perf 00:06:11.434 ************************************ 00:06:11.434 user 0m4.153s 00:06:11.434 sys 0m0.128s 00:06:11.434 02:39:22 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.434 02:39:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.694 02:39:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.694 02:39:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:11.694 02:39:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.694 02:39:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.694 ************************************ 00:06:11.694 START TEST event_reactor 00:06:11.694 ************************************ 00:06:11.694 02:39:22 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:11.694 [2024-12-07 02:39:22.576442] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:11.694 [2024-12-07 02:39:22.576698] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70217 ] 00:06:11.694 [2024-12-07 02:39:22.737356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.954 [2024-12-07 02:39:22.826640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.917 test_start 00:06:12.917 oneshot 00:06:12.917 tick 100 00:06:12.917 tick 100 00:06:12.917 tick 250 00:06:12.917 tick 100 00:06:12.917 tick 100 00:06:12.917 tick 100 00:06:12.917 tick 250 00:06:12.917 tick 500 00:06:12.917 tick 100 00:06:12.917 tick 100 00:06:12.917 tick 250 00:06:12.917 tick 100 00:06:12.917 tick 100 00:06:12.917 test_end 00:06:12.917 00:06:12.917 real 0m1.430s 00:06:12.917 user 0m1.190s 00:06:12.917 sys 0m0.131s 00:06:12.917 ************************************ 00:06:12.917 END TEST event_reactor 00:06:12.917 ************************************ 00:06:12.917 02:39:23 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.917 02:39:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:13.176 02:39:24 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.176 02:39:24 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:13.176 02:39:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.176 02:39:24 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.176 ************************************ 00:06:13.176 START TEST event_reactor_perf 00:06:13.177 ************************************ 00:06:13.177 02:39:24 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.177 [2024-12-07 
02:39:24.081437] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:13.177 [2024-12-07 02:39:24.081684] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70259 ] 00:06:13.177 [2024-12-07 02:39:24.243736] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.435 [2024-12-07 02:39:24.333659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.371 test_start 00:06:14.371 test_end 00:06:14.371 Performance: 395244 events per second 00:06:14.371 ************************************ 00:06:14.371 END TEST event_reactor_perf 00:06:14.371 ************************************ 00:06:14.371 00:06:14.371 real 0m1.382s 00:06:14.371 user 0m1.153s 00:06:14.371 sys 0m0.120s 00:06:14.371 02:39:25 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.371 02:39:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.630 02:39:25 event -- event/event.sh@49 -- # uname -s 00:06:14.630 02:39:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.630 02:39:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.630 02:39:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.630 02:39:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.630 02:39:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.630 ************************************ 00:06:14.630 START TEST event_scheduler 00:06:14.630 ************************************ 00:06:14.630 02:39:25 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:14.630 * Looking for test storage... 
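Each test above is launched through a `run_test` wrapper that prints the `START TEST`/`END TEST` banners around the command and reports its status. A minimal sketch of a wrapper in that style; the banner format follows the log, but the body is an assumption rather than the autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style harness: banner, run the command with its
# arguments, banner again, and propagate the command's exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test demo_true true
```

Propagating the wrapped command's exit code is what lets the surrounding `trap`/`killprocess` logic in the log detect a failed test.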
00:06:14.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:14.630 02:39:25 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:14.630 02:39:25 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.630 02:39:25 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.890 02:39:25 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.890 02:39:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:14.890 02:39:25 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.890 02:39:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.890 --rc genhtml_branch_coverage=1 00:06:14.890 --rc genhtml_function_coverage=1 00:06:14.890 --rc genhtml_legend=1 00:06:14.890 --rc geninfo_all_blocks=1 00:06:14.890 --rc geninfo_unexecuted_blocks=1 00:06:14.890 00:06:14.890 ' 00:06:14.890 02:39:25 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.890 --rc genhtml_branch_coverage=1 00:06:14.890 --rc genhtml_function_coverage=1 00:06:14.891 --rc 
genhtml_legend=1 00:06:14.891 --rc geninfo_all_blocks=1 00:06:14.891 --rc geninfo_unexecuted_blocks=1 00:06:14.891 00:06:14.891 ' 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.891 --rc genhtml_branch_coverage=1 00:06:14.891 --rc genhtml_function_coverage=1 00:06:14.891 --rc genhtml_legend=1 00:06:14.891 --rc geninfo_all_blocks=1 00:06:14.891 --rc geninfo_unexecuted_blocks=1 00:06:14.891 00:06:14.891 ' 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.891 --rc genhtml_branch_coverage=1 00:06:14.891 --rc genhtml_function_coverage=1 00:06:14.891 --rc genhtml_legend=1 00:06:14.891 --rc geninfo_all_blocks=1 00:06:14.891 --rc geninfo_unexecuted_blocks=1 00:06:14.891 00:06:14.891 ' 00:06:14.891 02:39:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.891 02:39:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70330 00:06:14.891 02:39:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.891 02:39:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.891 02:39:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70330 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70330 ']' 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:14.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.891 02:39:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.891 [2024-12-07 02:39:25.806820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:14.891 [2024-12-07 02:39:25.807053] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70330 ] 00:06:14.891 [2024-12-07 02:39:25.967035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.151 [2024-12-07 02:39:26.013741] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.151 [2024-12-07 02:39:26.013970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.151 [2024-12-07 02:39:26.013912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.151 [2024-12-07 02:39:26.014083] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:15.721 02:39:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.721 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.721 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.721 POWER: Cannot set governor of lcore 0 to performance 00:06:15.721 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.721 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.721 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:15.721 POWER: Cannot set governor of lcore 0 to userspace 00:06:15.721 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:15.721 POWER: Unable to set Power Management Environment for lcore 0 00:06:15.721 [2024-12-07 02:39:26.639039] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:15.721 [2024-12-07 02:39:26.639086] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:15.721 [2024-12-07 02:39:26.639146] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:15.721 [2024-12-07 02:39:26.639190] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:15.721 [2024-12-07 02:39:26.639200] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:15.721 [2024-12-07 02:39:26.639222] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.721 02:39:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 [2024-12-07 02:39:26.713611] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
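The POWER errors above come from the dynamic scheduler's governor init trying to write `scaling_governor` under /sys/devices/system/cpu, which is not available inside this VM, so it falls back as logged. A small probe for that precondition; this is a sketch and the helper name `governor_settable` is hypothetical:

```shell
#!/usr/bin/env bash
# Check whether the cpufreq governor knob the dpdk governor needs is
# writable for a given CPU; in a VM it typically is not.
governor_settable() {
    local cpu=${1:-0}
    [ -w "/sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor" ]
}

if governor_settable 0; then
    echo "lcore 0 governor can be set"
else
    echo "governor for lcore 0 not writable; dynamic scheduler falls back"
fi
```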
00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.721 02:39:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 ************************************ 00:06:15.721 START TEST scheduler_create_thread 00:06:15.721 ************************************ 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 2 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 3 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 4 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 5 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.721 6 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.721 02:39:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.980 7 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.980 8 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.980 02:39:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.244 9 00:06:16.244 02:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.244 02:39:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:16.244 02:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.244 02:39:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.628 10 00:06:17.628 02:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.628 02:39:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:17.628 02:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.628 02:39:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.569 02:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.569 02:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.569 02:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.569 02:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.569 02:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.139 02:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.139 02:39:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.139 02:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.139 02:39:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.709 02:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.709 02:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:19.709 02:39:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:19.709 02:39:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.709 02:39:30 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.278 ************************************ 00:06:20.278 END TEST scheduler_create_thread 00:06:20.278 ************************************ 00:06:20.278 02:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.278 00:06:20.278 real 0m4.453s 00:06:20.278 user 0m0.028s 00:06:20.278 sys 0m0.010s 00:06:20.278 02:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.278 02:39:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.278 02:39:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:20.278 02:39:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70330 00:06:20.278 02:39:31 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70330 ']' 00:06:20.278 02:39:31 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70330 00:06:20.278 02:39:31 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:20.278 02:39:31 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.279 02:39:31 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70330 00:06:20.279 killing process with pid 70330 00:06:20.279 02:39:31 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:20.279 02:39:31 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:20.279 02:39:31 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70330' 00:06:20.279 02:39:31 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70330 00:06:20.279 02:39:31 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70330 00:06:20.538 [2024-12-07 02:39:31.458756] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:20.799 00:06:20.799 real 0m6.254s 00:06:20.799 user 0m14.394s 00:06:20.799 sys 0m0.490s 00:06:20.799 02:39:31 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.799 02:39:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.799 ************************************ 00:06:20.799 END TEST event_scheduler 00:06:20.799 ************************************ 00:06:20.799 02:39:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:20.799 02:39:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:20.799 02:39:31 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.799 02:39:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.799 02:39:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.799 ************************************ 00:06:20.799 START TEST app_repeat 00:06:20.799 ************************************ 00:06:20.799 02:39:31 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70441 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:20.799 
02:39:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.799 Process app_repeat pid: 70441 00:06:20.799 spdk_app_start Round 0 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70441' 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:20.799 02:39:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70441 /var/tmp/spdk-nbd.sock 00:06:20.799 02:39:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70441 ']' 00:06:20.799 02:39:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.799 02:39:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.799 02:39:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:20.799 02:39:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.799 02:39:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.060 [2024-12-07 02:39:31.887608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
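`waitforlisten` above blocks until the target process is up and listening on its UNIX domain socket. A simplified poll loop in the same spirit (assumption: the real helper in autotest_common.sh also probes the socket over RPC; this sketch only checks that the socket file exists):

```shell
# Poll until a UNIX socket appears at $1, retrying up to $2 times
# (default 100) with a short sleep between attempts.
# Simplified stand-in for autotest_common.sh's waitforlisten.
wait_for_socket() {
  sock=$1
  retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    if [ -S "$sock" ]; then
      return 0                      # socket exists: process is up
    fi
    retries=$(( retries - 1 ))
    sleep 0.1
  done
  return 1                          # gave up waiting
}

# A path that never appears times out with a non-zero status:
if wait_for_socket /var/tmp/does-not-exist.sock 3; then
  echo "listening"
else
  echo "timed out"
fi
```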
00:06:21.060 [2024-12-07 02:39:31.887740] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70441 ] 00:06:21.060 [2024-12-07 02:39:32.051229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.060 [2024-12-07 02:39:32.123593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.060 [2024-12-07 02:39:32.123729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.999 02:39:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.999 02:39:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:21.999 02:39:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:21.999 Malloc0 00:06:21.999 02:39:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.259 Malloc1 00:06:22.259 02:39:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.259 02:39:33 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.259 02:39:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.260 02:39:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.260 02:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.260 02:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.260 02:39:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.520 /dev/nbd0 00:06:22.520 02:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.520 02:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.520 02:39:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.521 02:39:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.521 1+0 records in 00:06:22.521 1+0 
records out 00:06:22.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495076 s, 8.3 MB/s 00:06:22.521 02:39:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.521 02:39:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:22.521 02:39:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.521 02:39:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.521 02:39:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:22.521 02:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.521 02:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.521 02:39:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.780 /dev/nbd1 00:06:22.780 02:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.780 02:39:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.780 1+0 records in 00:06:22.780 1+0 records out 00:06:22.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177987 s, 23.0 MB/s 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:22.780 02:39:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:22.780 02:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.781 02:39:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.781 02:39:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.781 02:39:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.781 02:39:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.041 { 00:06:23.041 "nbd_device": "/dev/nbd0", 00:06:23.041 "bdev_name": "Malloc0" 00:06:23.041 }, 00:06:23.041 { 00:06:23.041 "nbd_device": "/dev/nbd1", 00:06:23.041 "bdev_name": "Malloc1" 00:06:23.041 } 00:06:23.041 ]' 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.041 { 00:06:23.041 "nbd_device": "/dev/nbd0", 00:06:23.041 "bdev_name": "Malloc0" 00:06:23.041 }, 00:06:23.041 { 00:06:23.041 "nbd_device": "/dev/nbd1", 00:06:23.041 "bdev_name": "Malloc1" 00:06:23.041 } 00:06:23.041 ]' 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
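The `nbd_get_disks` output above is parsed with `jq -r '.[] | .nbd_device'` and the device paths are counted via `grep -c /dev/nbd`. A self-contained sketch of the same counting step on a sample payload (the JSON literal below is illustrative, and plain `grep` stands in for `jq` so the snippet has no external dependency):

```shell
# JSON shaped like the nbd_get_disks RPC response logged above.
json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# nbd_common.sh extracts the device paths with jq and counts them with
# `grep -c /dev/nbd`; with one entry per line, grep alone gives the count.
count=$(printf '%s\n' "$json" | grep -c '"nbd_device"')
echo "$count"   # prints: 2
```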
00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.041 /dev/nbd1' 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.041 /dev/nbd1' 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.041 256+0 records in 00:06:23.041 256+0 records out 00:06:23.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128842 s, 81.4 MB/s 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.041 256+0 records in 00:06:23.041 256+0 records out 00:06:23.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227679 s, 46.1 MB/s 00:06:23.041 02:39:33 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.041 02:39:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.041 256+0 records in 00:06:23.041 256+0 records out 00:06:23.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229478 s, 45.7 MB/s 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.041 02:39:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.301 02:39:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.561 02:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.821 02:39:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.821 02:39:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.080 02:39:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.339 [2024-12-07 02:39:35.274815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.339 [2024-12-07 02:39:35.339953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.339 [2024-12-07 02:39:35.339954] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.597 
[2024-12-07 02:39:35.418604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.597 [2024-12-07 02:39:35.418693] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.136 spdk_app_start Round 1 00:06:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.136 02:39:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.136 02:39:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:27.136 02:39:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70441 /var/tmp/spdk-nbd.sock 00:06:27.136 02:39:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70441 ']' 00:06:27.136 02:39:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.136 02:39:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.136 02:39:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:27.136 02:39:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.136 02:39:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.136 02:39:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.136 02:39:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:27.136 02:39:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.396 Malloc0 00:06:27.396 02:39:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.656 Malloc1 00:06:27.656 02:39:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.656 02:39:38 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.656 02:39:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.916 /dev/nbd0 00:06:27.916 02:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.916 02:39:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:27.916 1+0 records in 00:06:27.916 1+0 records out 00:06:27.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465693 s, 8.8 MB/s 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:27.916 02:39:38 
event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:27.916 02:39:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:27.916 02:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.916 02:39:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.916 02:39:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.176 /dev/nbd1 00:06:28.176 02:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.176 02:39:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.176 1+0 records in 00:06:28.176 1+0 records out 00:06:28.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227882 s, 18.0 MB/s 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.176 02:39:39 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.176 02:39:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:28.176 02:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.176 02:39:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.176 02:39:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.176 02:39:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.176 02:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.436 { 00:06:28.436 "nbd_device": "/dev/nbd0", 00:06:28.436 "bdev_name": "Malloc0" 00:06:28.436 }, 00:06:28.436 { 00:06:28.436 "nbd_device": "/dev/nbd1", 00:06:28.436 "bdev_name": "Malloc1" 00:06:28.436 } 00:06:28.436 ]' 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.436 { 00:06:28.436 "nbd_device": "/dev/nbd0", 00:06:28.436 "bdev_name": "Malloc0" 00:06:28.436 }, 00:06:28.436 { 00:06:28.436 "nbd_device": "/dev/nbd1", 00:06:28.436 "bdev_name": "Malloc1" 00:06:28.436 } 00:06:28.436 ]' 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.436 /dev/nbd1' 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.436 /dev/nbd1' 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.436 
02:39:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.436 256+0 records in 00:06:28.436 256+0 records out 00:06:28.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136804 s, 76.6 MB/s 00:06:28.436 02:39:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.437 256+0 records in 00:06:28.437 256+0 records out 00:06:28.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235105 s, 44.6 MB/s 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.437 256+0 records in 00:06:28.437 256+0 records out 00:06:28.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213112 s, 49.2 MB/s 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.437 02:39:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.696 02:39:39 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.696 02:39:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.956 02:39:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.956 02:39:40 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.956 02:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.956 02:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.216 02:39:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.216 02:39:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.476 02:39:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.737 [2024-12-07 02:39:40.596467] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.737 [2024-12-07 02:39:40.658907] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.737 [2024-12-07 02:39:40.658935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.737 [2024-12-07 02:39:40.734926] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.737 [2024-12-07 02:39:40.735098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:32.292 02:39:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.292 02:39:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:32.292 spdk_app_start Round 2 00:06:32.292 02:39:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70441 /var/tmp/spdk-nbd.sock 00:06:32.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.292 02:39:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70441 ']' 00:06:32.292 02:39:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.292 02:39:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.292 02:39:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.292 02:39:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.292 02:39:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.550 02:39:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.551 02:39:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:32.551 02:39:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.808 Malloc0 00:06:32.808 02:39:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.067 Malloc1 00:06:33.067 02:39:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.067 
02:39:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.067 02:39:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.326 /dev/nbd0 00:06:33.326 02:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.326 02:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.326 02:39:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:33.326 02:39:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.326 02:39:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.326 02:39:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.326 02:39:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:33.326 02:39:44 
event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.326 02:39:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.326 02:39:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.327 02:39:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.327 1+0 records in 00:06:33.327 1+0 records out 00:06:33.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311742 s, 13.1 MB/s 00:06:33.327 02:39:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.327 02:39:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.327 02:39:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.327 02:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.327 02:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.327 02:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.327 02:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.327 02:39:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.586 /dev/nbd1 00:06:33.586 02:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.586 02:39:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.586 02:39:44 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.586 1+0 records in 00:06:33.586 1+0 records out 00:06:33.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393902 s, 10.4 MB/s 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.586 02:39:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.586 02:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.586 02:39:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.586 02:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.586 02:39:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.586 02:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.845 02:39:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.845 { 00:06:33.846 "nbd_device": "/dev/nbd0", 00:06:33.846 "bdev_name": "Malloc0" 00:06:33.846 }, 00:06:33.846 { 00:06:33.846 "nbd_device": "/dev/nbd1", 00:06:33.846 "bdev_name": 
"Malloc1" 00:06:33.846 } 00:06:33.846 ]' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.846 { 00:06:33.846 "nbd_device": "/dev/nbd0", 00:06:33.846 "bdev_name": "Malloc0" 00:06:33.846 }, 00:06:33.846 { 00:06:33.846 "nbd_device": "/dev/nbd1", 00:06:33.846 "bdev_name": "Malloc1" 00:06:33.846 } 00:06:33.846 ]' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.846 /dev/nbd1' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.846 /dev/nbd1' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.846 256+0 records in 00:06:33.846 256+0 records out 00:06:33.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141201 s, 74.3 MB/s 
00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.846 256+0 records in 00:06:33.846 256+0 records out 00:06:33.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182073 s, 57.6 MB/s 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.846 256+0 records in 00:06:33.846 256+0 records out 00:06:33.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215808 s, 48.6 MB/s 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.846 02:39:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.106 02:39:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.365 02:39:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.624 02:39:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.624 02:39:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.884 02:39:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.142 [2024-12-07 02:39:46.022932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.142 [2024-12-07 02:39:46.085683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.142 [2024-12-07 02:39:46.085691] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.142 [2024-12-07 02:39:46.162295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.142 [2024-12-07 02:39:46.162359] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.691 02:39:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70441 /var/tmp/spdk-nbd.sock 00:06:37.691 02:39:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70441 ']' 00:06:37.691 02:39:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.691 02:39:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.691 02:39:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:37.691 02:39:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.691 02:39:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.951 02:39:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.951 02:39:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:37.951 02:39:48 event.app_repeat -- event/event.sh@39 -- # killprocess 70441 00:06:37.951 02:39:48 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70441 ']' 00:06:37.951 02:39:48 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70441 00:06:37.951 02:39:48 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:37.951 02:39:48 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.951 02:39:48 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70441 00:06:37.951 killing process with pid 70441 00:06:37.951 02:39:49 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.951 02:39:49 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.951 02:39:49 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70441' 00:06:37.951 02:39:49 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70441 00:06:37.951 02:39:49 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70441 00:06:38.521 spdk_app_start is called in Round 0. 00:06:38.521 Shutdown signal received, stop current app iteration 00:06:38.521 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:38.521 spdk_app_start is called in Round 1. 00:06:38.521 Shutdown signal received, stop current app iteration 00:06:38.521 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:38.521 spdk_app_start is called in Round 2. 
00:06:38.521 Shutdown signal received, stop current app iteration 00:06:38.521 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 reinitialization... 00:06:38.521 spdk_app_start is called in Round 3. 00:06:38.521 Shutdown signal received, stop current app iteration 00:06:38.521 02:39:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:38.521 02:39:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:38.521 00:06:38.521 real 0m17.502s 00:06:38.521 user 0m37.748s 00:06:38.521 sys 0m2.973s 00:06:38.521 02:39:49 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.521 ************************************ 00:06:38.521 END TEST app_repeat 00:06:38.521 ************************************ 00:06:38.521 02:39:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.521 02:39:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:38.521 02:39:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.521 02:39:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.521 02:39:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.521 02:39:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.521 ************************************ 00:06:38.521 START TEST cpu_locks 00:06:38.521 ************************************ 00:06:38.521 02:39:49 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.521 * Looking for test storage... 
00:06:38.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.521 02:39:49 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.521 02:39:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.521 02:39:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.521 02:39:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:38.521 02:39:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.781 02:39:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:38.782 02:39:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:38.782 02:39:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.782 02:39:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:38.782 02:39:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.782 02:39:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.782 02:39:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.782 02:39:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.782 --rc genhtml_branch_coverage=1 00:06:38.782 --rc genhtml_function_coverage=1 00:06:38.782 --rc genhtml_legend=1 00:06:38.782 --rc geninfo_all_blocks=1 00:06:38.782 --rc geninfo_unexecuted_blocks=1 00:06:38.782 00:06:38.782 ' 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.782 --rc genhtml_branch_coverage=1 00:06:38.782 --rc genhtml_function_coverage=1 00:06:38.782 --rc genhtml_legend=1 00:06:38.782 --rc geninfo_all_blocks=1 00:06:38.782 --rc geninfo_unexecuted_blocks=1 
00:06:38.782 00:06:38.782 ' 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.782 --rc genhtml_branch_coverage=1 00:06:38.782 --rc genhtml_function_coverage=1 00:06:38.782 --rc genhtml_legend=1 00:06:38.782 --rc geninfo_all_blocks=1 00:06:38.782 --rc geninfo_unexecuted_blocks=1 00:06:38.782 00:06:38.782 ' 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.782 --rc genhtml_branch_coverage=1 00:06:38.782 --rc genhtml_function_coverage=1 00:06:38.782 --rc genhtml_legend=1 00:06:38.782 --rc geninfo_all_blocks=1 00:06:38.782 --rc geninfo_unexecuted_blocks=1 00:06:38.782 00:06:38.782 ' 00:06:38.782 02:39:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:38.782 02:39:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:38.782 02:39:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:38.782 02:39:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.782 02:39:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.782 ************************************ 00:06:38.782 START TEST default_locks 00:06:38.782 ************************************ 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70872 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.782 
02:39:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70872 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70872 ']' 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.782 02:39:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.782 [2024-12-07 02:39:49.723994] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:38.782 [2024-12-07 02:39:49.724221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70872 ] 00:06:39.042 [2024-12-07 02:39:49.885245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.042 [2024-12-07 02:39:49.957438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.611 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.611 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:39.611 02:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70872 00:06:39.611 02:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70872 00:06:39.611 02:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.871 02:39:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70872 00:06:39.871 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70872 ']' 00:06:39.872 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70872 00:06:39.872 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:39.872 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.872 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70872 00:06:40.132 killing process with pid 70872 00:06:40.132 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.132 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.132 02:39:50 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70872' 00:06:40.132 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70872 00:06:40.132 02:39:50 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70872 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70872 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70872 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70872 00:06:40.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70872 ']' 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.703 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70872) - No such process 00:06:40.703 ERROR: process (pid: 70872) is no longer running 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:40.703 00:06:40.703 real 0m1.992s 00:06:40.703 user 0m1.786s 00:06:40.703 sys 0m0.757s 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.703 02:39:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.703 ************************************ 00:06:40.703 END TEST default_locks 00:06:40.703 ************************************ 00:06:40.703 02:39:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:40.703 02:39:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:40.703 02:39:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.703 02:39:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.703 ************************************ 00:06:40.703 START TEST default_locks_via_rpc 00:06:40.703 ************************************ 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70927 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70927 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70927 ']' 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.703 02:39:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.963 [2024-12-07 02:39:51.796536] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:40.964 [2024-12-07 02:39:51.796745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70927 ] 00:06:40.964 [2024-12-07 02:39:51.957093] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.964 [2024-12-07 02:39:52.026362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.900 02:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.900 02:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:41.900 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.901 02:39:52 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70927 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.901 02:39:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70927 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70927 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70927 ']' 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70927 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70927 00:06:42.159 killing process with pid 70927 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70927' 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70927 00:06:42.159 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70927 00:06:43.098 00:06:43.098 real 0m2.124s 00:06:43.098 user 0m1.962s 00:06:43.098 sys 0m0.803s 00:06:43.098 02:39:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.098 02:39:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.098 ************************************ 00:06:43.098 END TEST default_locks_via_rpc 00:06:43.098 ************************************ 00:06:43.098 02:39:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:43.098 02:39:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.098 02:39:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.098 02:39:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.098 ************************************ 00:06:43.098 START TEST non_locking_app_on_locked_coremask 00:06:43.098 ************************************ 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70980 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70980 /var/tmp/spdk.sock 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70980 ']' 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.098 02:39:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.098 [2024-12-07 02:39:53.989407] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:43.098 [2024-12-07 02:39:53.989640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70980 ] 00:06:43.098 [2024-12-07 02:39:54.148904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.358 [2024-12-07 02:39:54.219111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70996 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70996 /var/tmp/spdk2.sock 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70996 ']' 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.929 02:39:54 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.929 02:39:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.929 [2024-12-07 02:39:54.896814] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:43.929 [2024-12-07 02:39:54.897036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70996 ] 00:06:44.189 [2024-12-07 02:39:55.047822] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:44.189 [2024-12-07 02:39:55.047895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.189 [2024-12-07 02:39:55.191503] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.172 02:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.172 02:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.172 02:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70980 00:06:45.172 02:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70980 00:06:45.172 02:39:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70980 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70980 ']' 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70980 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70980 00:06:45.172 killing process with pid 70980 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70980' 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70980 00:06:45.172 02:39:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70980 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70996 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70996 ']' 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70996 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70996 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.555 killing process with pid 70996 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70996' 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70996 00:06:46.555 02:39:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70996 00:06:47.125 00:06:47.125 real 0m4.285s 00:06:47.125 user 0m4.099s 00:06:47.125 sys 0m1.346s 00:06:47.125 02:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:06:47.125 ************************************ 00:06:47.125 END TEST non_locking_app_on_locked_coremask 00:06:47.125 ************************************ 00:06:47.125 02:39:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.385 02:39:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.385 02:39:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.385 02:39:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.385 02:39:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.385 ************************************ 00:06:47.385 START TEST locking_app_on_unlocked_coremask 00:06:47.385 ************************************ 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71065 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71065 /var/tmp/spdk.sock 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71065 ']' 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.385 02:39:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.385 [2024-12-07 02:39:58.349026] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:47.385 [2024-12-07 02:39:58.349148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71065 ] 00:06:47.645 [2024-12-07 02:39:58.511107] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.645 [2024-12-07 02:39:58.511178] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.645 [2024-12-07 02:39:58.585513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.213 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71081 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71081 /var/tmp/spdk2.sock 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71081 
']' 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.214 02:39:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.214 [2024-12-07 02:39:59.216860] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:48.214 [2024-12-07 02:39:59.217071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71081 ] 00:06:48.473 [2024-12-07 02:39:59.367091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.473 [2024-12-07 02:39:59.526333] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.412 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.412 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.412 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71081 00:06:49.412 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71081 00:06:49.412 02:40:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.672 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71065 00:06:49.672 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71065 ']' 00:06:49.672 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71065 00:06:49.672 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:49.672 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.672 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71065 00:06:49.932 killing process with pid 71065 00:06:49.932 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.932 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.932 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71065' 00:06:49.932 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71065 00:06:49.932 02:40:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71065 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71081 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71081 ']' 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 71081 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@955 -- # uname 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71081 00:06:51.315 killing process with pid 71081 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71081' 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 71081 00:06:51.315 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 71081 00:06:51.885 00:06:51.885 real 0m4.456s 00:06:51.885 user 0m4.306s 00:06:51.885 sys 0m1.383s 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 ************************************ 00:06:51.885 END TEST locking_app_on_unlocked_coremask 00:06:51.885 ************************************ 00:06:51.885 02:40:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.885 02:40:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.885 02:40:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.885 02:40:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 ************************************ 00:06:51.885 START TEST 
locking_app_on_locked_coremask 00:06:51.885 ************************************ 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71156 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71156 /var/tmp/spdk.sock 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71156 ']' 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.885 02:40:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.885 [2024-12-07 02:40:02.877663] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:51.885 [2024-12-07 02:40:02.877801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71156 ] 00:06:52.144 [2024-12-07 02:40:03.039448] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.144 [2024-12-07 02:40:03.116003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71166 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71166 /var/tmp/spdk2.sock 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71166 /var/tmp/spdk2.sock 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71166 /var/tmp/spdk2.sock 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71166 ']' 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.713 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.714 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.714 02:40:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.714 [2024-12-07 02:40:03.752070] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:52.714 [2024-12-07 02:40:03.752299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71166 ] 00:06:52.972 [2024-12-07 02:40:03.902847] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71156 has claimed it. 00:06:52.972 [2024-12-07 02:40:03.902915] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:53.541 ERROR: process (pid: 71166) is no longer running 00:06:53.541 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71166) - No such process 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71156 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.541 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71156 00:06:53.799 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71156 00:06:53.799 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71156 ']' 00:06:53.799 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71156 00:06:53.799 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.799 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.799 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71156 00:06:54.058 
02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.058 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.058 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71156' 00:06:54.058 killing process with pid 71156 00:06:54.058 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71156 00:06:54.058 02:40:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71156 00:06:54.624 00:06:54.624 real 0m2.775s 00:06:54.624 user 0m2.772s 00:06:54.624 sys 0m0.926s 00:06:54.624 02:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.624 02:40:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.624 ************************************ 00:06:54.624 END TEST locking_app_on_locked_coremask 00:06:54.624 ************************************ 00:06:54.624 02:40:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:54.624 02:40:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.624 02:40:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.624 02:40:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.624 ************************************ 00:06:54.624 START TEST locking_overlapped_coremask 00:06:54.624 ************************************ 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71219 00:06:54.624 02:40:05 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71219 /var/tmp/spdk.sock 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71219 ']' 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.624 02:40:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.882 [2024-12-07 02:40:05.714706] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:54.882 [2024-12-07 02:40:05.714925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71219 ] 00:06:54.882 [2024-12-07 02:40:05.875095] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.882 [2024-12-07 02:40:05.948510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.882 [2024-12-07 02:40:05.948472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.882 [2024-12-07 02:40:05.948552] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71237 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71237 /var/tmp/spdk2.sock 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71237 /var/tmp/spdk2.sock 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:55.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71237 /var/tmp/spdk2.sock 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71237 ']' 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.450 02:40:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.709 [2024-12-07 02:40:06.607261] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:55.709 [2024-12-07 02:40:06.607397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71237 ] 00:06:55.709 [2024-12-07 02:40:06.757616] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71219 has claimed it. 00:06:55.709 [2024-12-07 02:40:06.757701] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
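The overlap rejected above is a pure bitmask collision: the first target was started with `-m 0x7` (cores 0-2) and the second requested `-m 0x1c` (cores 2-4), so core 2 is contested and the claim fails. The arithmetic can be sketched directly (helper names are illustrative):

```python
def mask_cores(mask):
    """Expand a hex coremask into the list of core indices it selects."""
    return [i for i in range(mask.bit_length()) if mask >> i & 1]

def overlapping_cores(held_mask, requested_mask):
    """Cores that would collide between two coremasks."""
    return mask_cores(held_mask & requested_mask)

# The case from this test: 0b00111 & 0b11100 == 0b00100, i.e. core 2.
# overlapping_cores(0x7, 0x1c) -> [2]
```

This matches the logged error message, which names core 2 as the one process 71219 has already claimed.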
00:06:56.277 ERROR: process (pid: 71237) is no longer running 00:06:56.277 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71237) - No such process 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71219 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71219 ']' 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71219 00:06:56.277 02:40:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71219 00:06:56.277 killing process with pid 71219 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71219' 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71219 00:06:56.277 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71219 00:06:57.215 ************************************ 00:06:57.215 END TEST locking_overlapped_coremask 00:06:57.215 ************************************ 00:06:57.215 00:06:57.215 real 0m2.311s 00:06:57.215 user 0m5.850s 00:06:57.215 sys 0m0.710s 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.215 02:40:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:57.215 02:40:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.215 02:40:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.215 02:40:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.215 ************************************ 00:06:57.215 START TEST 
locking_overlapped_coremask_via_rpc 00:06:57.215 ************************************ 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71285 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71285 /var/tmp/spdk.sock 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71285 ']' 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.215 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.216 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.216 02:40:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.216 [2024-12-07 02:40:08.098090] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:06:57.216 [2024-12-07 02:40:08.098291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71285 ] 00:06:57.216 [2024-12-07 02:40:08.247707] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:57.216 [2024-12-07 02:40:08.247854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.476 [2024-12-07 02:40:08.322963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.476 [2024-12-07 02:40:08.323053] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.476 [2024-12-07 02:40:08.323151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.062 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.062 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71297 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71297 /var/tmp/spdk2.sock 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71297 ']' 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.063 02:40:08 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.063 02:40:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.063 [2024-12-07 02:40:09.006571] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:06:58.063 [2024-12-07 02:40:09.006796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71297 ] 00:06:58.322 [2024-12-07 02:40:09.159204] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:58.322 [2024-12-07 02:40:09.159272] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.322 [2024-12-07 02:40:09.256495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.322 [2024-12-07 02:40:09.259779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.322 [2024-12-07 02:40:09.259896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.889 02:40:09 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.889 [2024-12-07 02:40:09.817801] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71285 has claimed it. 00:06:58.889 request: 00:06:58.889 { 00:06:58.889 "method": "framework_enable_cpumask_locks", 00:06:58.889 "req_id": 1 00:06:58.889 } 00:06:58.889 Got JSON-RPC error response 00:06:58.889 response: 00:06:58.889 { 00:06:58.889 "code": -32603, 00:06:58.889 "message": "Failed to claim CPU core: 2" 00:06:58.889 } 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71285 /var/tmp/spdk.sock 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 71285 ']' 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.889 02:40:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.147 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.147 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:59.148 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71297 /var/tmp/spdk2.sock 00:06:59.148 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71297 ']' 00:06:59.148 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.148 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.148 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:59.148 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.148 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.407 00:06:59.407 real 0m2.246s 00:06:59.407 user 0m1.008s 00:06:59.407 sys 0m0.176s 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.407 02:40:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.407 ************************************ 00:06:59.407 END TEST locking_overlapped_coremask_via_rpc 00:06:59.407 ************************************ 00:06:59.407 02:40:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:59.407 02:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71285 ]] 00:06:59.407 02:40:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 71285 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71285 ']' 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71285 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71285 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71285' 00:06:59.407 killing process with pid 71285 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71285 00:06:59.407 02:40:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71285 00:06:59.975 02:40:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71297 ]] 00:06:59.975 02:40:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71297 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71297 ']' 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71297 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71297 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71297' 00:06:59.975 killing 
process with pid 71297 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71297 00:06:59.975 02:40:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71297 00:07:00.545 02:40:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.545 02:40:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.545 02:40:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71285 ]] 00:07:00.545 02:40:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71285 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71285 ']' 00:07:00.545 Process with pid 71285 is not found 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71285 00:07:00.545 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71285) - No such process 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71285 is not found' 00:07:00.545 02:40:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71297 ]] 00:07:00.545 02:40:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71297 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71297 ']' 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71297 00:07:00.545 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71297) - No such process 00:07:00.545 Process with pid 71297 is not found 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71297 is not found' 00:07:00.545 02:40:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.545 00:07:00.545 real 0m22.069s 00:07:00.545 user 0m33.995s 00:07:00.545 sys 0m7.429s 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.545 02:40:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.545 
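The killprocess calls traced above follow a fixed guard sequence from autotest_common.sh: refuse an empty pid, probe liveness with `kill -0`, and on Linux refuse to kill a process whose command name is `sudo` before sending the signal and reaping. A minimal sketch of that pattern (the trace uses `ps --no-headers -o comm= <pid>`; the POSIX `-p` form below is an equivalent assumption):

```shell
# Sketch of the killprocess guard seen in the trace: empty-pid check,
# kill -0 liveness probe, sudo-name refusal, then kill and wait.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1               # mirrors the '[' -z pid ']' guard
  kill -0 "$pid" 2>/dev/null || return 1  # process must still exist
  if [ "$(uname)" = Linux ]; then
    local name
    name=$(ps -p "$pid" -o comm=)         # command name only, no header
    [ "$name" != sudo ] || return 1       # never signal a sudo wrapper
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true         # reap so the pid is gone
}
```

Usage matches the trace: `killprocess 71285` succeeds while the reactor is alive; once the process is gone, `kill -0` fails and the cleanup path falls through to the "No such process" message instead.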
************************************ 00:07:00.545 END TEST cpu_locks 00:07:00.545 ************************************ 00:07:00.545 00:07:00.545 real 0m50.706s 00:07:00.545 user 1m32.894s 00:07:00.545 sys 0m11.678s 00:07:00.545 02:40:11 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.545 02:40:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.545 ************************************ 00:07:00.545 END TEST event 00:07:00.545 ************************************ 00:07:00.545 02:40:11 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:00.545 02:40:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.545 02:40:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.545 02:40:11 -- common/autotest_common.sh@10 -- # set +x 00:07:00.545 ************************************ 00:07:00.545 START TEST thread 00:07:00.545 ************************************ 00:07:00.545 02:40:11 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:00.806 * Looking for test storage... 
00:07:00.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.806 02:40:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.806 02:40:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.806 02:40:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.806 02:40:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.806 02:40:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.806 02:40:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.806 02:40:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.806 02:40:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.806 02:40:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.806 02:40:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.806 02:40:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.806 02:40:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:00.806 02:40:11 thread -- scripts/common.sh@345 -- # : 1 00:07:00.806 02:40:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.806 02:40:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.806 02:40:11 thread -- scripts/common.sh@365 -- # decimal 1 00:07:00.806 02:40:11 thread -- scripts/common.sh@353 -- # local d=1 00:07:00.806 02:40:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.806 02:40:11 thread -- scripts/common.sh@355 -- # echo 1 00:07:00.806 02:40:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.806 02:40:11 thread -- scripts/common.sh@366 -- # decimal 2 00:07:00.806 02:40:11 thread -- scripts/common.sh@353 -- # local d=2 00:07:00.806 02:40:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.806 02:40:11 thread -- scripts/common.sh@355 -- # echo 2 00:07:00.806 02:40:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.806 02:40:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.806 02:40:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.806 02:40:11 thread -- scripts/common.sh@368 -- # return 0 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.806 --rc genhtml_branch_coverage=1 00:07:00.806 --rc genhtml_function_coverage=1 00:07:00.806 --rc genhtml_legend=1 00:07:00.806 --rc geninfo_all_blocks=1 00:07:00.806 --rc geninfo_unexecuted_blocks=1 00:07:00.806 00:07:00.806 ' 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.806 --rc genhtml_branch_coverage=1 00:07:00.806 --rc genhtml_function_coverage=1 00:07:00.806 --rc genhtml_legend=1 00:07:00.806 --rc geninfo_all_blocks=1 00:07:00.806 --rc geninfo_unexecuted_blocks=1 00:07:00.806 00:07:00.806 ' 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.806 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.806 --rc genhtml_branch_coverage=1 00:07:00.806 --rc genhtml_function_coverage=1 00:07:00.806 --rc genhtml_legend=1 00:07:00.806 --rc geninfo_all_blocks=1 00:07:00.806 --rc geninfo_unexecuted_blocks=1 00:07:00.806 00:07:00.806 ' 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.806 --rc genhtml_branch_coverage=1 00:07:00.806 --rc genhtml_function_coverage=1 00:07:00.806 --rc genhtml_legend=1 00:07:00.806 --rc geninfo_all_blocks=1 00:07:00.806 --rc geninfo_unexecuted_blocks=1 00:07:00.806 00:07:00.806 ' 00:07:00.806 02:40:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.806 02:40:11 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.806 ************************************ 00:07:00.806 START TEST thread_poller_perf 00:07:00.806 ************************************ 00:07:00.806 02:40:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.806 [2024-12-07 02:40:11.865381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:00.806 [2024-12-07 02:40:11.865557] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71437 ] 00:07:01.066 [2024-12-07 02:40:12.025165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.066 [2024-12-07 02:40:12.101188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.066 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:02.450 [2024-12-07T02:40:13.528Z] ====================================== 00:07:02.450 [2024-12-07T02:40:13.528Z] busy:2296748596 (cyc) 00:07:02.450 [2024-12-07T02:40:13.528Z] total_run_count: 409000 00:07:02.450 [2024-12-07T02:40:13.528Z] tsc_hz: 2290000000 (cyc) 00:07:02.450 [2024-12-07T02:40:13.528Z] ====================================== 00:07:02.450 [2024-12-07T02:40:13.528Z] poller_cost: 5615 (cyc), 2451 (nsec) 00:07:02.450 ************************************ 00:07:02.450 END TEST thread_poller_perf 00:07:02.450 ************************************ 00:07:02.450 00:07:02.450 real 0m1.421s 00:07:02.450 user 0m1.194s 00:07:02.450 sys 0m0.120s 00:07:02.450 02:40:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.450 02:40:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:02.450 02:40:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.450 02:40:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:02.450 02:40:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.450 02:40:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.450 ************************************ 00:07:02.450 START TEST thread_poller_perf 00:07:02.450 
************************************ 00:07:02.450 02:40:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:02.450 [2024-12-07 02:40:13.352992] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:02.450 [2024-12-07 02:40:13.353177] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71478 ] 00:07:02.450 [2024-12-07 02:40:13.511017] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.711 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:02.711 [2024-12-07 02:40:13.583184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.650 [2024-12-07T02:40:14.728Z] ====================================== 00:07:03.650 [2024-12-07T02:40:14.728Z] busy:2293560936 (cyc) 00:07:03.650 [2024-12-07T02:40:14.728Z] total_run_count: 5502000 00:07:03.650 [2024-12-07T02:40:14.728Z] tsc_hz: 2290000000 (cyc) 00:07:03.650 [2024-12-07T02:40:14.728Z] ====================================== 00:07:03.650 [2024-12-07T02:40:14.728Z] poller_cost: 416 (cyc), 181 (nsec) 00:07:03.650 00:07:03.650 real 0m1.408s 00:07:03.650 user 0m1.191s 00:07:03.651 sys 0m0.111s 00:07:03.651 02:40:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.651 02:40:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.651 ************************************ 00:07:03.651 END TEST thread_poller_perf 00:07:03.651 ************************************ 00:07:03.911 02:40:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.911 ************************************ 00:07:03.911 END TEST thread 00:07:03.911 ************************************ 00:07:03.911 
00:07:03.911 real 0m3.187s 00:07:03.911 user 0m2.550s 00:07:03.911 sys 0m0.438s 00:07:03.911 02:40:14 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.911 02:40:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 02:40:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:03.911 02:40:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:03.911 02:40:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.911 02:40:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.911 02:40:14 -- common/autotest_common.sh@10 -- # set +x 00:07:03.911 ************************************ 00:07:03.911 START TEST app_cmdline 00:07:03.911 ************************************ 00:07:03.911 02:40:14 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:03.911 * Looking for test storage... 00:07:03.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:03.911 02:40:14 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.911 02:40:14 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.911 02:40:14 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.177 02:40:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:04.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.177 --rc genhtml_branch_coverage=1 00:07:04.177 --rc genhtml_function_coverage=1 00:07:04.177 --rc 
genhtml_legend=1 00:07:04.177 --rc geninfo_all_blocks=1 00:07:04.177 --rc geninfo_unexecuted_blocks=1 00:07:04.177 00:07:04.177 ' 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:04.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.177 --rc genhtml_branch_coverage=1 00:07:04.177 --rc genhtml_function_coverage=1 00:07:04.177 --rc genhtml_legend=1 00:07:04.177 --rc geninfo_all_blocks=1 00:07:04.177 --rc geninfo_unexecuted_blocks=1 00:07:04.177 00:07:04.177 ' 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:04.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.177 --rc genhtml_branch_coverage=1 00:07:04.177 --rc genhtml_function_coverage=1 00:07:04.177 --rc genhtml_legend=1 00:07:04.177 --rc geninfo_all_blocks=1 00:07:04.177 --rc geninfo_unexecuted_blocks=1 00:07:04.177 00:07:04.177 ' 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:04.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.177 --rc genhtml_branch_coverage=1 00:07:04.177 --rc genhtml_function_coverage=1 00:07:04.177 --rc genhtml_legend=1 00:07:04.177 --rc geninfo_all_blocks=1 00:07:04.177 --rc geninfo_unexecuted_blocks=1 00:07:04.177 00:07:04.177 ' 00:07:04.177 02:40:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:04.177 02:40:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:04.177 02:40:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71557 00:07:04.177 02:40:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71557 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71557 ']' 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.177 02:40:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.177 [2024-12-07 02:40:15.142285] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:04.177 [2024-12-07 02:40:15.142880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71557 ] 00:07:04.441 [2024-12-07 02:40:15.305331] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.441 [2024-12-07 02:40:15.379021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.011 02:40:15 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.011 02:40:15 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:05.011 02:40:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:05.271 { 00:07:05.271 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:05.271 "fields": { 00:07:05.271 "major": 24, 00:07:05.271 "minor": 9, 00:07:05.271 "patch": 1, 00:07:05.271 "suffix": "-pre", 00:07:05.271 "commit": "b18e1bd62" 00:07:05.271 } 00:07:05.271 } 00:07:05.271 02:40:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:05.271 02:40:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:05.271 02:40:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:05.271 02:40:16 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:05.271 02:40:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:05.271 02:40:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:05.272 02:40:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.272 02:40:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:05.272 02:40:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:05.272 02:40:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:05.272 02:40:16 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:05.531 request: 00:07:05.531 { 00:07:05.531 "method": "env_dpdk_get_mem_stats", 00:07:05.531 "req_id": 1 00:07:05.531 } 00:07:05.531 Got JSON-RPC error response 00:07:05.531 response: 00:07:05.531 { 00:07:05.531 "code": -32601, 00:07:05.531 "message": "Method not found" 00:07:05.531 } 00:07:05.531 02:40:16 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.532 02:40:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71557 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71557 ']' 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71557 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71557 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71557' 00:07:05.532 killing process with pid 71557 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@969 -- # kill 71557 00:07:05.532 02:40:16 app_cmdline -- common/autotest_common.sh@974 -- # wait 71557 00:07:06.100 00:07:06.101 real 0m2.247s 00:07:06.101 user 0m2.327s 00:07:06.101 sys 0m0.665s 00:07:06.101 02:40:17 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.101 02:40:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.101 ************************************ 00:07:06.101 END TEST app_cmdline 00:07:06.101 ************************************ 00:07:06.101 02:40:17 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:06.101 02:40:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.101 02:40:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.101 02:40:17 -- common/autotest_common.sh@10 -- # set +x 00:07:06.101 ************************************ 00:07:06.101 START TEST version 00:07:06.101 ************************************ 00:07:06.101 02:40:17 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:06.361 * Looking for test storage... 00:07:06.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.361 02:40:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.361 02:40:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.361 02:40:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.361 02:40:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.361 02:40:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.361 02:40:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.361 02:40:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.361 02:40:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.361 02:40:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.361 02:40:17 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:06.361 02:40:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.361 02:40:17 version -- scripts/common.sh@344 -- # case "$op" in 00:07:06.361 02:40:17 version -- scripts/common.sh@345 -- # : 1 00:07:06.361 02:40:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.361 02:40:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.361 02:40:17 version -- scripts/common.sh@365 -- # decimal 1 00:07:06.361 02:40:17 version -- scripts/common.sh@353 -- # local d=1 00:07:06.361 02:40:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.361 02:40:17 version -- scripts/common.sh@355 -- # echo 1 00:07:06.361 02:40:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.361 02:40:17 version -- scripts/common.sh@366 -- # decimal 2 00:07:06.361 02:40:17 version -- scripts/common.sh@353 -- # local d=2 00:07:06.361 02:40:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.361 02:40:17 version -- scripts/common.sh@355 -- # echo 2 00:07:06.361 02:40:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.361 02:40:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.361 02:40:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.361 02:40:17 version -- scripts/common.sh@368 -- # return 0 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.361 --rc genhtml_branch_coverage=1 00:07:06.361 --rc genhtml_function_coverage=1 00:07:06.361 --rc genhtml_legend=1 00:07:06.361 --rc geninfo_all_blocks=1 00:07:06.361 --rc geninfo_unexecuted_blocks=1 00:07:06.361 00:07:06.361 ' 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:07:06.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.361 --rc genhtml_branch_coverage=1 00:07:06.361 --rc genhtml_function_coverage=1 00:07:06.361 --rc genhtml_legend=1 00:07:06.361 --rc geninfo_all_blocks=1 00:07:06.361 --rc geninfo_unexecuted_blocks=1 00:07:06.361 00:07:06.361 ' 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.361 --rc genhtml_branch_coverage=1 00:07:06.361 --rc genhtml_function_coverage=1 00:07:06.361 --rc genhtml_legend=1 00:07:06.361 --rc geninfo_all_blocks=1 00:07:06.361 --rc geninfo_unexecuted_blocks=1 00:07:06.361 00:07:06.361 ' 00:07:06.361 02:40:17 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.361 --rc genhtml_branch_coverage=1 00:07:06.361 --rc genhtml_function_coverage=1 00:07:06.361 --rc genhtml_legend=1 00:07:06.361 --rc geninfo_all_blocks=1 00:07:06.361 --rc geninfo_unexecuted_blocks=1 00:07:06.361 00:07:06.361 ' 00:07:06.361 02:40:17 version -- app/version.sh@17 -- # get_header_version major 00:07:06.361 02:40:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.361 02:40:17 version -- app/version.sh@17 -- # major=24 00:07:06.361 02:40:17 version -- app/version.sh@18 -- # get_header_version minor 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.361 02:40:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.361 02:40:17 version -- app/version.sh@18 -- # minor=9 00:07:06.361 02:40:17 
version -- app/version.sh@19 -- # get_header_version patch 00:07:06.361 02:40:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.361 02:40:17 version -- app/version.sh@19 -- # patch=1 00:07:06.361 02:40:17 version -- app/version.sh@20 -- # get_header_version suffix 00:07:06.361 02:40:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # cut -f2 00:07:06.361 02:40:17 version -- app/version.sh@14 -- # tr -d '"' 00:07:06.361 02:40:17 version -- app/version.sh@20 -- # suffix=-pre 00:07:06.361 02:40:17 version -- app/version.sh@22 -- # version=24.9 00:07:06.361 02:40:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:06.361 02:40:17 version -- app/version.sh@25 -- # version=24.9.1 00:07:06.361 02:40:17 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:06.361 02:40:17 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:06.361 02:40:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:06.620 02:40:17 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:06.620 02:40:17 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:06.620 ************************************ 00:07:06.620 END TEST version 00:07:06.620 ************************************ 00:07:06.620 00:07:06.620 real 0m0.314s 00:07:06.620 user 0m0.195s 00:07:06.620 sys 0m0.171s 00:07:06.620 02:40:17 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:07:06.620 02:40:17 version -- common/autotest_common.sh@10 -- # set +x 00:07:06.620 02:40:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:06.620 02:40:17 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:06.620 02:40:17 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:06.620 02:40:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.620 02:40:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.620 02:40:17 -- common/autotest_common.sh@10 -- # set +x 00:07:06.620 ************************************ 00:07:06.620 START TEST bdev_raid 00:07:06.620 ************************************ 00:07:06.620 02:40:17 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:06.620 * Looking for test storage... 00:07:06.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:06.620 02:40:17 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:06.620 02:40:17 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:06.620 02:40:17 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:06.881 02:40:17 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.881 02:40:17 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.881 02:40:17 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:06.881 02:40:17 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.881 02:40:17 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:06.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.881 --rc genhtml_branch_coverage=1 00:07:06.881 --rc genhtml_function_coverage=1 00:07:06.881 --rc genhtml_legend=1 00:07:06.881 --rc geninfo_all_blocks=1 00:07:06.881 --rc geninfo_unexecuted_blocks=1 00:07:06.881 00:07:06.881 ' 00:07:06.881 02:40:17 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:06.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.881 --rc genhtml_branch_coverage=1 00:07:06.881 --rc genhtml_function_coverage=1 00:07:06.881 --rc genhtml_legend=1 00:07:06.881 --rc geninfo_all_blocks=1 00:07:06.881 --rc geninfo_unexecuted_blocks=1 00:07:06.881 00:07:06.881 ' 00:07:06.881 02:40:17 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:06.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.881 --rc genhtml_branch_coverage=1 00:07:06.881 --rc genhtml_function_coverage=1 00:07:06.881 --rc genhtml_legend=1 00:07:06.881 --rc geninfo_all_blocks=1 00:07:06.881 --rc geninfo_unexecuted_blocks=1 00:07:06.881 00:07:06.881 ' 00:07:06.881 02:40:17 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:06.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.881 --rc genhtml_branch_coverage=1 00:07:06.881 --rc genhtml_function_coverage=1 00:07:06.881 --rc genhtml_legend=1 00:07:06.881 --rc geninfo_all_blocks=1 00:07:06.881 --rc geninfo_unexecuted_blocks=1 00:07:06.881 00:07:06.881 ' 00:07:06.881 02:40:17 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:06.881 02:40:17 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.881 02:40:17 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:06.881 02:40:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:06.881 02:40:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:06.881 02:40:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:06.881 02:40:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:06.881 02:40:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.881 02:40:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.881 02:40:17 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.881 ************************************ 00:07:06.881 START TEST raid1_resize_data_offset_test 00:07:06.881 ************************************ 00:07:06.881 Process raid pid: 71728 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71728 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71728' 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71728 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71728 ']' 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.881 02:40:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.881 [2024-12-07 02:40:17.862210] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:06.881 [2024-12-07 02:40:17.862454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.140 [2024-12-07 02:40:18.023665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.140 [2024-12-07 02:40:18.093111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.140 [2024-12-07 02:40:18.170869] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.140 [2024-12-07 02:40:18.171023] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.709 malloc0 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.709 malloc1 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.709 02:40:18 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.709 null0 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.709 [2024-12-07 02:40:18.776364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:07.709 [2024-12-07 02:40:18.778485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:07.709 [2024-12-07 02:40:18.778564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:07.709 [2024-12-07 02:40:18.778760] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:07.709 [2024-12-07 02:40:18.778818] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:07.709 [2024-12-07 02:40:18.779123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:07.709 [2024-12-07 02:40:18.779309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:07.709 [2024-12-07 02:40:18.779353] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:07.709 [2024-12-07 02:40:18.779534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:07.709 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.968 [2024-12-07 02:40:18.836214] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.968 02:40:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.228 malloc2 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.228 [2024-12-07 02:40:19.057681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:08.228 [2024-12-07 02:40:19.065257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.228 [2024-12-07 02:40:19.067394] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71728 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71728 ']' 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71728 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71728 00:07:08.228 killing process with pid 71728 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71728' 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71728 00:07:08.228 [2024-12-07 02:40:19.158271] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.228 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71728 00:07:08.228 [2024-12-07 02:40:19.159027] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:08.228 [2024-12-07 02:40:19.159094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.228 [2024-12-07 02:40:19.159113] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:08.228 [2024-12-07 02:40:19.166549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.228 [2024-12-07 02:40:19.166962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.228 [2024-12-07 02:40:19.166986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:08.488 [2024-12-07 02:40:19.559239] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.057 ************************************ 00:07:09.057 END TEST raid1_resize_data_offset_test 00:07:09.057 ************************************ 00:07:09.057 02:40:19 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:07:09.057 00:07:09.057 real 0m2.149s 00:07:09.057 user 0m1.947s 00:07:09.057 sys 0m0.628s 00:07:09.057 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.057 02:40:19 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.057 02:40:19 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:09.057 02:40:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:09.057 02:40:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.057 02:40:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.057 ************************************ 00:07:09.057 START TEST raid0_resize_superblock_test 00:07:09.057 ************************************ 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71784 00:07:09.057 Process raid pid: 71784 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71784' 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71784 00:07:09.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71784 ']' 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.057 02:40:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.057 [2024-12-07 02:40:20.075219] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:09.057 [2024-12-07 02:40:20.075441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.317 [2024-12-07 02:40:20.237413] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.317 [2024-12-07 02:40:20.310803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.317 [2024-12-07 02:40:20.389276] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.317 [2024-12-07 02:40:20.389402] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.888 02:40:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.888 02:40:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:09.888 02:40:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:09.888 02:40:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.888 02:40:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.149 malloc0 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.149 [2024-12-07 02:40:21.105709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:10.149 [2024-12-07 02:40:21.105785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.149 [2024-12-07 02:40:21.105817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:10.149 [2024-12-07 02:40:21.105832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.149 [2024-12-07 02:40:21.108182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.149 [2024-12-07 02:40:21.108317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:10.149 pt0 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.149 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.409 bc8a1140-617a-478b-945e-4b4c93a4da64 00:07:10.409 02:40:21 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.409 8a34c09c-0dd2-4106-8ed3-cf0315efe8e8 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.409 59921ead-c243-4eed-ab6d-9c813c3ed666 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.409 [2024-12-07 02:40:21.314690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a34c09c-0dd2-4106-8ed3-cf0315efe8e8 is claimed 00:07:10.409 [2024-12-07 02:40:21.314784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 59921ead-c243-4eed-ab6d-9c813c3ed666 is claimed 00:07:10.409 [2024-12-07 02:40:21.314896] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:10.409 [2024-12-07 02:40:21.314909] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:10.409 [2024-12-07 02:40:21.315178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:10.409 [2024-12-07 02:40:21.315338] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:10.409 [2024-12-07 02:40:21.315348] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:10.409 [2024-12-07 02:40:21.315485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.409 02:40:21 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.409 [2024-12-07 02:40:21.426803] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.409 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.410 [2024-12-07 02:40:21.454677] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:10.410 [2024-12-07 02:40:21.454710] bdev_raid.c:2326:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '8a34c09c-0dd2-4106-8ed3-cf0315efe8e8' was resized: old size 131072, new size 204800 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.410 [2024-12-07 02:40:21.466488] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:10.410 [2024-12-07 02:40:21.466573] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '59921ead-c243-4eed-ab6d-9c813c3ed666' was resized: old size 131072, new size 204800 00:07:10.410 [2024-12-07 02:40:21.466615] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.410 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:10.670 02:40:21 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:10.670 [2024-12-07 02:40:21.570388] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.670 [2024-12-07 02:40:21.618125] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:10.670 [2024-12-07 02:40:21.618235] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:10.670 [2024-12-07 02:40:21.618263] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.670 [2024-12-07 02:40:21.618296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:10.670 [2024-12-07 02:40:21.618452] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.670 [2024-12-07 02:40:21.618547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.670 [2024-12-07 02:40:21.618603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.670 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.670 [2024-12-07 02:40:21.630027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:10.670 [2024-12-07 02:40:21.630088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.670 [2024-12-07 02:40:21.630112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:10.670 [2024-12-07 02:40:21.630132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.670 
[2024-12-07 02:40:21.632558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.670 [2024-12-07 02:40:21.632604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:10.670 [2024-12-07 02:40:21.634035] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8a34c09c-0dd2-4106-8ed3-cf0315efe8e8 00:07:10.670 pt0 00:07:10.670 [2024-12-07 02:40:21.634152] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a34c09c-0dd2-4106-8ed3-cf0315efe8e8 is claimed 00:07:10.670 [2024-12-07 02:40:21.634246] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 59921ead-c243-4eed-ab6d-9c813c3ed666 00:07:10.670 [2024-12-07 02:40:21.634269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 59921ead-c243-4eed-ab6d-9c813c3ed666 is claimed 00:07:10.670 [2024-12-07 02:40:21.634378] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 59921ead-c243-4eed-ab6d-9c813c3ed666 (2) smaller than existing raid bdev Raid (3) 00:07:10.670 [2024-12-07 02:40:21.634399] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 8a34c09c-0dd2-4106-8ed3-cf0315efe8e8: File exists 00:07:10.670 [2024-12-07 02:40:21.634441] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:10.670 [2024-12-07 02:40:21.634450] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:10.670 [2024-12-07 02:40:21.634687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:10.670 [2024-12-07 02:40:21.634809] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:10.670 [2024-12-07 02:40:21.634817] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:10.670 [2024-12-07 02:40:21.634924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:10.671 [2024-12-07 02:40:21.654277] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71784 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test 
-- common/autotest_common.sh@950 -- # '[' -z 71784 ']' 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71784 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71784 00:07:10.671 killing process with pid 71784 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71784' 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71784 00:07:10.671 [2024-12-07 02:40:21.737336] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.671 [2024-12-07 02:40:21.737391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.671 [2024-12-07 02:40:21.737425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.671 [2024-12-07 02:40:21.737432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:10.671 02:40:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71784 00:07:11.250 [2024-12-07 02:40:22.041912] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.510 ************************************ 00:07:11.510 END TEST raid0_resize_superblock_test 00:07:11.510 ************************************ 00:07:11.510 02:40:22 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:11.510 00:07:11.510 real 0m2.418s 00:07:11.510 user 0m2.487s 00:07:11.510 sys 0m0.668s 00:07:11.510 02:40:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.510 02:40:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.510 02:40:22 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:11.510 02:40:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:11.510 02:40:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.510 02:40:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.510 ************************************ 00:07:11.510 START TEST raid1_resize_superblock_test 00:07:11.510 ************************************ 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:11.510 Process raid pid: 71861 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71861 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71861' 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71861 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71861 ']' 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.510 02:40:22 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.510 02:40:22 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.510 [2024-12-07 02:40:22.553404] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:11.510 [2024-12-07 02:40:22.553635] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.770 [2024-12-07 02:40:22.715115] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.770 [2024-12-07 02:40:22.783111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.030 [2024-12-07 02:40:22.859171] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.030 [2024-12-07 02:40:22.859319] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:12.600 malloc0 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.600 [2024-12-07 02:40:23.590452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:12.600 [2024-12-07 02:40:23.590533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.600 [2024-12-07 02:40:23.590556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:12.600 [2024-12-07 02:40:23.590568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.600 [2024-12-07 02:40:23.592923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.600 [2024-12-07 02:40:23.593035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:12.600 pt0 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:12.600 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.601 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 375e9f52-c07e-468d-a1e2-bb31586cee96 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd 
bdev_lvol_create -l lvs0 lvol0 64 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 afa4aebc-1536-4960-b8ee-cf68e04a4cb5 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 cef2da2f-cd0a-416b-8d3e-0e791231096f 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 [2024-12-07 02:40:23.796849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev afa4aebc-1536-4960-b8ee-cf68e04a4cb5 is claimed 00:07:12.860 [2024-12-07 02:40:23.796945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev cef2da2f-cd0a-416b-8d3e-0e791231096f is claimed 00:07:12.860 [2024-12-07 02:40:23.797054] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:12.860 [2024-12-07 02:40:23.797074] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:12.860 
[2024-12-07 02:40:23.797334] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:12.860 [2024-12-07 02:40:23.797488] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:12.860 [2024-12-07 02:40:23.797498] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:12.860 [2024-12-07 02:40:23.797653] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:12.860 02:40:23 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.860 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.860 [2024-12-07 02:40:23.920868] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.121 [2024-12-07 02:40:23.968674] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:13.121 [2024-12-07 02:40:23.968748] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'afa4aebc-1536-4960-b8ee-cf68e04a4cb5' was resized: old size 131072, new size 204800 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.121 [2024-12-07 02:40:23.980564] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:13.121 [2024-12-07 02:40:23.980644] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cef2da2f-cd0a-416b-8d3e-0e791231096f' was resized: old size 131072, new size 204800 00:07:13.121 [2024-12-07 02:40:23.980712] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.121 02:40:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.121 02:40:24 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.121 [2024-12-07 02:40:24.072507] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.121 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.121 [2024-12-07 02:40:24.100290] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:07:13.121 [2024-12-07 02:40:24.100361] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:13.121 [2024-12-07 02:40:24.100388] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:13.121 [2024-12-07 02:40:24.100516] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.122 [2024-12-07 02:40:24.100654] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.122 [2024-12-07 02:40:24.100709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.122 [2024-12-07 02:40:24.100722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.122 [2024-12-07 02:40:24.112222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:13.122 [2024-12-07 02:40:24.112280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.122 [2024-12-07 02:40:24.112300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:13.122 [2024-12-07 02:40:24.112313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.122 [2024-12-07 02:40:24.114554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.122 [2024-12-07 02:40:24.114599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:13.122 [2024-12-07 02:40:24.115915] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev afa4aebc-1536-4960-b8ee-cf68e04a4cb5 00:07:13.122 [2024-12-07 02:40:24.115974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev afa4aebc-1536-4960-b8ee-cf68e04a4cb5 is claimed 00:07:13.122 [2024-12-07 02:40:24.116047] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cef2da2f-cd0a-416b-8d3e-0e791231096f 00:07:13.122 [2024-12-07 02:40:24.116067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev cef2da2f-cd0a-416b-8d3e-0e791231096f is claimed 00:07:13.122 [2024-12-07 02:40:24.116140] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev cef2da2f-cd0a-416b-8d3e-0e791231096f (2) smaller than existing raid bdev Raid (3) 00:07:13.122 [2024-12-07 02:40:24.116160] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev afa4aebc-1536-4960-b8ee-cf68e04a4cb5: File exists 00:07:13.122 [2024-12-07 02:40:24.116200] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:07:13.122 [2024-12-07 02:40:24.116210] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:13.122 [2024-12-07 02:40:24.116435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:13.122 [2024-12-07 02:40:24.116548] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:07:13.122 [2024-12-07 02:40:24.116556] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006600 00:07:13.122 [2024-12-07 02:40:24.116697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.122 pt0 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.122 [2024-12-07 02:40:24.140843] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71861 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71861 ']' 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71861 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.122 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71861 00:07:13.382 killing process with pid 71861 00:07:13.382 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.382 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.382 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71861' 00:07:13.382 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71861 00:07:13.382 [2024-12-07 02:40:24.226913] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.382 [2024-12-07 02:40:24.226962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.382 [2024-12-07 02:40:24.226998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.382 [2024-12-07 02:40:24.227007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Raid, state offline 00:07:13.382 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71861 00:07:13.645 [2024-12-07 02:40:24.529836] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.904 ************************************ 00:07:13.904 END TEST raid1_resize_superblock_test 00:07:13.904 ************************************ 00:07:13.904 02:40:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:13.904 00:07:13.904 real 0m2.420s 00:07:13.904 user 0m2.486s 00:07:13.904 sys 0m0.691s 00:07:13.904 02:40:24 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.904 02:40:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.904 02:40:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:13.904 02:40:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:13.904 02:40:24 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:13.904 02:40:24 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:13.904 02:40:24 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:13.904 02:40:24 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:13.904 02:40:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.904 02:40:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.904 02:40:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.163 ************************************ 00:07:14.163 START TEST raid_function_test_raid0 00:07:14.163 ************************************ 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:14.163 Process raid pid: 71941 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71941 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71941' 00:07:14.163 02:40:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # 
waitforlisten 71941 00:07:14.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.164 02:40:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 71941 ']' 00:07:14.164 02:40:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.164 02:40:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.164 02:40:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.164 02:40:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.164 02:40:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:14.164 [2024-12-07 02:40:25.070618] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:14.164 [2024-12-07 02:40:25.071129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.164 [2024-12-07 02:40:25.230502] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.422 [2024-12-07 02:40:25.298427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.422 [2024-12-07 02:40:25.373610] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.422 [2024-12-07 02:40:25.373736] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:14.989 02:40:25 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 Base_1 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 Base_2 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 [2024-12-07 02:40:25.943468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:14.989 [2024-12-07 02:40:25.945571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:14.989 [2024-12-07 02:40:25.945655] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:14.989 [2024-12-07 02:40:25.945672] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:14.989 [2024-12-07 02:40:25.945947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:14.989 [2024-12-07 02:40:25.946083] bdev_raid.c:1760:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000006280 00:07:14.989 [2024-12-07 02:40:25.946092] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 00:07:14.989 [2024-12-07 02:40:25.946212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:14.989 02:40:25 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:14.989 02:40:25 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:14.989 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:14.989 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:15.248 [2024-12-07 02:40:26.171039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.248 /dev/nbd0 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.248 1+0 records in 00:07:15.248 1+0 records out 00:07:15.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348044 s, 11.8 MB/s 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:15.248 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:15.506 { 00:07:15.506 "nbd_device": "/dev/nbd0", 00:07:15.506 "bdev_name": "raid" 00:07:15.506 } 00:07:15.506 ]' 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:15.506 { 00:07:15.506 "nbd_device": "/dev/nbd0", 00:07:15.506 "bdev_name": "raid" 00:07:15.506 } 00:07:15.506 ]' 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 
-- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:15.506 4096+0 records in 00:07:15.506 4096+0 records out 00:07:15.506 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0331752 s, 63.2 MB/s 00:07:15.506 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:15.765 4096+0 records in 00:07:15.765 4096+0 records out 00:07:15.765 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.223564 s, 9.4 MB/s 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:15.765 128+0 records in 00:07:15.765 128+0 records out 00:07:15.765 65536 bytes (66 kB, 64 KiB) copied, 0.0011841 s, 55.3 MB/s 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 
-- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:15.765 2035+0 records in 00:07:15.765 2035+0 records out 00:07:15.765 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0152449 s, 68.3 MB/s 00:07:15.765 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:16.023 456+0 records in 00:07:16.023 456+0 records out 00:07:16.023 233472 bytes (233 kB, 228 KiB) copied, 0.00383171 s, 60.9 MB/s 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:16.023 
02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.023 02:40:26 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:16.023 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.023 [2024-12-07 02:40:27.094492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.024 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.024 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.024 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.024 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # 
(( i <= 20 )) 00:07:16.024 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:16.282 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.283 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71941 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@950 -- # '[' -z 71941 ']' 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 71941 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71941 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71941' 00:07:16.541 killing process with pid 71941 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71941 00:07:16.541 [2024-12-07 02:40:27.414002] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.541 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71941 00:07:16.541 [2024-12-07 02:40:27.414141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.541 [2024-12-07 02:40:27.414206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.541 [2024-12-07 02:40:27.414220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:16.541 [2024-12-07 02:40:27.455571] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.800 02:40:27 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:16.800 00:07:16.800 real 0m2.838s 00:07:16.800 user 0m3.341s 00:07:16.800 sys 0m1.018s 00:07:16.800 ************************************ 
00:07:16.800 END TEST raid_function_test_raid0 00:07:16.800 ************************************ 00:07:16.800 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.800 02:40:27 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:17.060 02:40:27 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:17.060 02:40:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:17.060 02:40:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.060 02:40:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.060 ************************************ 00:07:17.060 START TEST raid_function_test_concat 00:07:17.060 ************************************ 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=72056 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 72056' 00:07:17.060 Process raid pid: 72056 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 72056 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 72056 ']' 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.060 02:40:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:17.060 [2024-12-07 02:40:27.977800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:17.060 [2024-12-07 02:40:27.978030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.321 [2024-12-07 02:40:28.138053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.321 [2024-12-07 02:40:28.205649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.321 [2024-12-07 02:40:28.282991] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.321 [2024-12-07 02:40:28.283153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 Base_1 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 Base_2 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 [2024-12-07 02:40:28.850919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:17.891 [2024-12-07 02:40:28.853114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:17.891 [2024-12-07 02:40:28.853187] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:17.891 [2024-12-07 02:40:28.853199] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:17.891 [2024-12-07 02:40:28.853482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:17.891 [2024-12-07 02:40:28.853622] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:17.891 [2024-12-07 02:40:28.853632] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000006280 
00:07:17.891 [2024-12-07 02:40:28.853783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.891 02:40:28 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:17.891 02:40:28 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:18.151 [2024-12-07 02:40:29.090496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:18.151 /dev/nbd0 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.151 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.151 1+0 records in 00:07:18.151 1+0 records out 00:07:18.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522127 s, 7.8 MB/s 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 
00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:18.152 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.412 { 00:07:18.412 "nbd_device": "/dev/nbd0", 00:07:18.412 "bdev_name": "raid" 00:07:18.412 } 00:07:18.412 ]' 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.412 { 00:07:18.412 "nbd_device": "/dev/nbd0", 00:07:18.412 "bdev_name": "raid" 00:07:18.412 } 00:07:18.412 ]' 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:18.412 02:40:29 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local 
unmap_len 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:18.412 4096+0 records in 00:07:18.412 4096+0 records out 00:07:18.412 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0256848 s, 81.6 MB/s 00:07:18.412 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:18.672 4096+0 records in 00:07:18.672 4096+0 records out 00:07:18.672 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.193107 s, 10.9 MB/s 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:18.672 128+0 records in 00:07:18.672 128+0 records out 00:07:18.672 65536 bytes (66 kB, 64 KiB) copied, 0.00111414 s, 58.8 MB/s 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:18.672 2035+0 records in 00:07:18.672 2035+0 records out 00:07:18.672 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0130083 s, 80.1 MB/s 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:18.672 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:18.932 456+0 records in 00:07:18.932 456+0 records out 00:07:18.932 233472 bytes (233 kB, 228 KiB) copied, 0.0035606 s, 65.6 MB/s 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 
00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.932 [2024-12-07 02:40:29.987502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:18.932 02:40:29 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 72056 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 72056 ']' 
00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 72056 00:07:19.192 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:19.452 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.452 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72056 00:07:19.452 killing process with pid 72056 00:07:19.452 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.452 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.452 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72056' 00:07:19.452 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 72056 00:07:19.452 [2024-12-07 02:40:30.308072] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.452 [2024-12-07 02:40:30.308204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.452 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 72056 00:07:19.452 [2024-12-07 02:40:30.308276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.452 [2024-12-07 02:40:30.308297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid, state offline 00:07:19.452 [2024-12-07 02:40:30.350273] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.712 ************************************ 00:07:19.712 END TEST raid_function_test_concat 00:07:19.712 ************************************ 00:07:19.712 02:40:30 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:19.712 00:07:19.712 real 0m2.833s 
00:07:19.712 user 0m3.356s 00:07:19.712 sys 0m0.982s 00:07:19.712 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.712 02:40:30 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:19.972 02:40:30 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:19.972 02:40:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:19.972 02:40:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.972 02:40:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.972 ************************************ 00:07:19.972 START TEST raid0_resize_test 00:07:19.972 ************************************ 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72173 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:19.972 Process raid pid: 72173 00:07:19.972 02:40:30 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72173' 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72173 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72173 ']' 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.972 02:40:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.972 [2024-12-07 02:40:30.888617] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:19.972 [2024-12-07 02:40:30.888851] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.232 [2024-12-07 02:40:31.049996] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.232 [2024-12-07 02:40:31.118369] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.232 [2024-12-07 02:40:31.193704] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.232 [2024-12-07 02:40:31.193837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.802 Base_1 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.802 Base_2 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.802 [2024-12-07 02:40:31.740882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:20.802 [2024-12-07 02:40:31.742830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:20.802 [2024-12-07 02:40:31.742950] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:20.802 [2024-12-07 02:40:31.742965] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:20.802 [2024-12-07 02:40:31.743216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:07:20.802 [2024-12-07 02:40:31.743320] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:20.802 [2024-12-07 02:40:31.743329] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280 00:07:20.802 [2024-12-07 02:40:31.743432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.802 [2024-12-07 02:40:31.752813] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:20.802 [2024-12-07 02:40:31.752845] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:20.802 true 
00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.802 [2024-12-07 02:40:31.768992] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.802 [2024-12-07 02:40:31.816714] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:20.802 [2024-12-07 02:40:31.816771] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:20.802 [2024-12-07 02:40:31.816826] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:20.802 true 
00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.802 [2024-12-07 02:40:31.832869] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72173 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72173 ']' 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 72173 00:07:20.802 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:21.062 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.062 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72173 00:07:21.062 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.062 02:40:31 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.062 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72173' 00:07:21.062 killing process with pid 72173 00:07:21.062 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 72173 00:07:21.062 [2024-12-07 02:40:31.918404] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.062 [2024-12-07 02:40:31.918528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.062 [2024-12-07 02:40:31.918617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.062 02:40:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 72173 00:07:21.062 [2024-12-07 02:40:31.918670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline 00:07:21.062 [2024-12-07 02:40:31.920719] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.322 02:40:32 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:21.322 00:07:21.322 real 0m1.487s 00:07:21.322 user 0m1.563s 00:07:21.322 sys 0m0.399s 00:07:21.322 02:40:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.322 02:40:32 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.322 ************************************ 00:07:21.322 END TEST raid0_resize_test 00:07:21.322 ************************************ 00:07:21.322 02:40:32 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:21.322 02:40:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:21.322 02:40:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.322 02:40:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.322 
************************************
00:07:21.322 START TEST raid1_resize_test
00:07:21.322 ************************************
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=72222
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 72222'
00:07:21.322 Process raid pid: 72222
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 72222
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 72222 ']'
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:21.322 02:40:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:21.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:21.323 02:40:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:21.323 02:40:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.582 [2024-12-07 02:40:32.448169] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:07:21.583 [2024-12-07 02:40:32.448307] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:21.583 [2024-12-07 02:40:32.609968] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.842 [2024-12-07 02:40:32.679171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.842 [2024-12-07 02:40:32.755132] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:21.842 [2024-12-07 02:40:32.755172] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.416 Base_1
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.416 Base_2
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.416 [2024-12-07 02:40:33.301981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:22.416 [2024-12-07 02:40:33.304064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:22.416 [2024-12-07 02:40:33.304126] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280
00:07:22.416 [2024-12-07 02:40:33.304137] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:07:22.416 [2024-12-07 02:40:33.304415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:07:22.416 [2024-12-07 02:40:33.304542] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280
00:07:22.416 [2024-12-07 02:40:33.304552] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000006280
00:07:22.416 [2024-12-07 02:40:33.304711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.416 [2024-12-07 02:40:33.313909] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:22.416 [2024-12-07 02:40:33.313935] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:07:22.416 true
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.416 [2024-12-07 02:40:33.330071] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.416 [2024-12-07 02:40:33.377798] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:22.416 [2024-12-07 02:40:33.377822] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:07:22.416 [2024-12-07 02:40:33.377845] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:07:22.416 true
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.416 [2024-12-07 02:40:33.393944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 72222
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 72222 ']'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 72222
02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72222
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:22.416 killing process with pid 72222
02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72222'
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 72222
00:07:22.416 [2024-12-07 02:40:33.462209] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:22.416 [2024-12-07 02:40:33.462310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:22.416 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 72222
00:07:22.416 [2024-12-07 02:40:33.462745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:22.416 [2024-12-07 02:40:33.462775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Raid, state offline
00:07:22.416 [2024-12-07 02:40:33.464528] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:22.987 02:40:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:07:22.987
00:07:22.987 real 0m1.471s
00:07:22.987 user 0m1.522s
00:07:22.987 sys 0m0.404s
00:07:22.987 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:22.987 02:40:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.987 ************************************
00:07:22.987 END TEST raid1_resize_test
00:07:22.987 ************************************
00:07:22.988 02:40:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:07:22.988 02:40:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:22.988 02:40:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:07:22.988 02:40:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:22.988 02:40:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:22.988 02:40:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:22.988 ************************************
00:07:22.988 START TEST raid_state_function_test
00:07:22.988 ************************************
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72275
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:22.988 Process raid pid: 72275
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72275'
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72275
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72275 ']'
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:22.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:22.988 02:40:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.248 [2024-12-07 02:40:33.998165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
[2024-12-07 02:40:33.998317] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:23.248 [2024-12-07 02:40:34.161913] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.248 [2024-12-07 02:40:34.242631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.248 [2024-12-07 02:40:34.319087] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:23.248 [2024-12-07 02:40:34.319127] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.843 [2024-12-07 02:40:34.826398] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:23.843 [2024-12-07 02:40:34.826459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:23.843 [2024-12-07 02:40:34.826472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:23.843 [2024-12-07 02:40:34.826482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:23.843 "name": "Existed_Raid",
00:07:23.843 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:23.843 "strip_size_kb": 64,
00:07:23.843 "state": "configuring",
00:07:23.843 "raid_level": "raid0",
00:07:23.843 "superblock": false,
00:07:23.843 "num_base_bdevs": 2,
00:07:23.843 "num_base_bdevs_discovered": 0,
00:07:23.843 "num_base_bdevs_operational": 2,
00:07:23.843 "base_bdevs_list": [
00:07:23.843 {
00:07:23.843 "name": "BaseBdev1",
00:07:23.843 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:23.843 "is_configured": false,
00:07:23.843 "data_offset": 0,
00:07:23.843 "data_size": 0
00:07:23.843 },
00:07:23.843 {
00:07:23.843 "name": "BaseBdev2",
00:07:23.843 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:23.843 "is_configured": false,
00:07:23.843 "data_offset": 0,
00:07:23.843 "data_size": 0
00:07:23.843 }
00:07:23.843 ]
00:07:23.843 }'
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:23.843 02:40:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.412 [2024-12-07 02:40:35.261520] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:24.412 [2024-12-07 02:40:35.261576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.412 [2024-12-07 02:40:35.273540] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:24.412 [2024-12-07 02:40:35.273599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:24.412 [2024-12-07 02:40:35.273608] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:24.412 [2024-12-07 02:40:35.273619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.412 [2024-12-07 02:40:35.300723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:24.412 BaseBdev1
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.412 [
00:07:24.412 {
00:07:24.412 "name": "BaseBdev1",
00:07:24.412 "aliases": [
00:07:24.412 "676e868c-ceb8-41a2-8cdf-b4d30cfe4116"
00:07:24.412 ],
00:07:24.412 "product_name": "Malloc disk",
00:07:24.412 "block_size": 512,
00:07:24.412 "num_blocks": 65536,
00:07:24.412 "uuid": "676e868c-ceb8-41a2-8cdf-b4d30cfe4116",
00:07:24.412 "assigned_rate_limits": {
00:07:24.412 "rw_ios_per_sec": 0,
00:07:24.412 "rw_mbytes_per_sec": 0,
00:07:24.412 "r_mbytes_per_sec": 0,
00:07:24.412 "w_mbytes_per_sec": 0
00:07:24.412 },
00:07:24.412 "claimed": true,
00:07:24.412 "claim_type": "exclusive_write",
00:07:24.412 "zoned": false,
00:07:24.412 "supported_io_types": {
00:07:24.412 "read": true,
00:07:24.412 "write": true,
00:07:24.412 "unmap": true,
00:07:24.412 "flush": true,
00:07:24.412 "reset": true,
00:07:24.412 "nvme_admin": false,
00:07:24.412 "nvme_io": false,
00:07:24.412 "nvme_io_md": false,
00:07:24.412 "write_zeroes": true,
00:07:24.412 "zcopy": true,
00:07:24.412 "get_zone_info": false,
00:07:24.412 "zone_management": false,
00:07:24.412 "zone_append": false,
00:07:24.412 "compare": false,
00:07:24.412 "compare_and_write": false,
00:07:24.412 "abort": true,
00:07:24.412 "seek_hole": false,
00:07:24.412 "seek_data": false,
00:07:24.412 "copy": true,
00:07:24.412 "nvme_iov_md": false
00:07:24.412 },
00:07:24.412 "memory_domains": [
00:07:24.412 {
00:07:24.412 "dma_device_id": "system",
00:07:24.412 "dma_device_type": 1
00:07:24.412 },
00:07:24.412 {
00:07:24.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.412 "dma_device_type": 2
00:07:24.412 }
00:07:24.412 ],
00:07:24.412 "driver_specific": {}
00:07:24.412 }
00:07:24.412 ]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.412 "name": "Existed_Raid",
00:07:24.412 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.412 "strip_size_kb": 64,
00:07:24.412 "state": "configuring",
00:07:24.412 "raid_level": "raid0",
00:07:24.412 "superblock": false,
00:07:24.412 "num_base_bdevs": 2,
00:07:24.412 "num_base_bdevs_discovered": 1,
00:07:24.412 "num_base_bdevs_operational": 2,
00:07:24.412 "base_bdevs_list": [
00:07:24.412 {
00:07:24.412 "name": "BaseBdev1",
00:07:24.412 "uuid": "676e868c-ceb8-41a2-8cdf-b4d30cfe4116",
00:07:24.412 "is_configured": true,
00:07:24.412 "data_offset": 0,
00:07:24.412 "data_size": 65536
00:07:24.412 },
00:07:24.412 {
00:07:24.412 "name": "BaseBdev2",
00:07:24.412 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.412 "is_configured": false,
00:07:24.412 "data_offset": 0,
00:07:24.412 "data_size": 0
00:07:24.412 }
00:07:24.412 ]
00:07:24.412 }'
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:24.412 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.982 [2024-12-07 02:40:35.807876] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:07:24.982 [2024-12-07 02:40:35.807929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.982 [2024-12-07 02:40:35.815883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:24.982 [2024-12-07 02:40:35.817986] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:24.982 [2024-12-07 02:40:35.818024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:24.982 "name": "Existed_Raid",
00:07:24.982 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.982 "strip_size_kb": 64,
00:07:24.982 "state": "configuring",
00:07:24.982 "raid_level": "raid0",
00:07:24.982 "superblock": false,
00:07:24.982 "num_base_bdevs": 2,
00:07:24.982 "num_base_bdevs_discovered": 1,
00:07:24.982 "num_base_bdevs_operational": 2,
00:07:24.982 "base_bdevs_list": [
00:07:24.982 {
00:07:24.982 "name": "BaseBdev1",
00:07:24.982 "uuid": "676e868c-ceb8-41a2-8cdf-b4d30cfe4116",
00:07:24.982 "is_configured": true,
00:07:24.982 "data_offset": 0,
00:07:24.982 "data_size": 65536
00:07:24.982 },
00:07:24.982 {
00:07:24.982 "name": "BaseBdev2",
00:07:24.982 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:24.982 "is_configured": false,
00:07:24.982 "data_offset": 0,
00:07:24.982 "data_size": 0
00:07:24.982 }
00:07:24.982 ]
00:07:24.982 }'
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:24.982 02:40:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.242 [2024-12-07 02:40:36.279863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:25.242 [2024-12-07 02:40:36.279959] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:07:25.242 [2024-12-07 02:40:36.279982] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:25.242 [2024-12-07 02:40:36.280694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:07:25.242 [2024-12-07 02:40:36.281019] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980
00:07:25.242 [2024-12-07 02:40:36.281067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980
00:07:25.242 [2024-12-07 02:40:36.281519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:25.242 BaseBdev2
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.242 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.242 [
00:07:25.242 {
00:07:25.242 "name": "BaseBdev2",
00:07:25.242 "aliases": [
00:07:25.242 "ae2b5e20-e759-41fd-8e3a-abbaaec5b9f0"
00:07:25.242 ],
00:07:25.242 "product_name": "Malloc disk",
00:07:25.242 "block_size": 512,
00:07:25.242 "num_blocks": 65536,
00:07:25.242 "uuid": "ae2b5e20-e759-41fd-8e3a-abbaaec5b9f0",
00:07:25.242 "assigned_rate_limits": {
00:07:25.242 "rw_ios_per_sec": 0,
00:07:25.242 "rw_mbytes_per_sec": 0,
00:07:25.242 "r_mbytes_per_sec": 0,
00:07:25.242 "w_mbytes_per_sec": 0
00:07:25.242 },
00:07:25.242 "claimed": true,
00:07:25.242 "claim_type": "exclusive_write",
00:07:25.242 "zoned": false,
00:07:25.242 "supported_io_types": {
00:07:25.242 "read": true,
00:07:25.242 "write": true,
00:07:25.242 "unmap": true,
00:07:25.242 "flush": true,
00:07:25.242 "reset": true,
00:07:25.242 "nvme_admin": false,
00:07:25.242 "nvme_io": false,
00:07:25.242 "nvme_io_md": false,
00:07:25.242 "write_zeroes": true,
00:07:25.242 "zcopy": true,
00:07:25.242 "get_zone_info": false,
00:07:25.242 "zone_management": false,
00:07:25.242 "zone_append": false,
00:07:25.242 "compare": false,
00:07:25.242 "compare_and_write": false,
00:07:25.242 "abort": true,
00:07:25.242 "seek_hole": false,
00:07:25.242 "seek_data": false,
00:07:25.242 "copy": true,
00:07:25.242 "nvme_iov_md": false
00:07:25.242 },
00:07:25.242 "memory_domains": [
00:07:25.242 {
00:07:25.242 "dma_device_id": "system",
00:07:25.242 "dma_device_type": 1
00:07:25.242 },
00:07:25.242 {
00:07:25.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:25.242 "dma_device_type": 2
00:07:25.242 }
00:07:25.242 ],
00:07:25.242 "driver_specific": {}
00:07:25.242 }
00:07:25.242 ]
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- #
raid_bdev_info='{ 00:07:25.502 "name": "Existed_Raid", 00:07:25.502 "uuid": "d902f2f5-cb5a-4994-a17e-8d2e6509f831", 00:07:25.502 "strip_size_kb": 64, 00:07:25.502 "state": "online", 00:07:25.502 "raid_level": "raid0", 00:07:25.502 "superblock": false, 00:07:25.502 "num_base_bdevs": 2, 00:07:25.502 "num_base_bdevs_discovered": 2, 00:07:25.502 "num_base_bdevs_operational": 2, 00:07:25.502 "base_bdevs_list": [ 00:07:25.502 { 00:07:25.502 "name": "BaseBdev1", 00:07:25.502 "uuid": "676e868c-ceb8-41a2-8cdf-b4d30cfe4116", 00:07:25.502 "is_configured": true, 00:07:25.502 "data_offset": 0, 00:07:25.502 "data_size": 65536 00:07:25.502 }, 00:07:25.502 { 00:07:25.502 "name": "BaseBdev2", 00:07:25.502 "uuid": "ae2b5e20-e759-41fd-8e3a-abbaaec5b9f0", 00:07:25.502 "is_configured": true, 00:07:25.502 "data_offset": 0, 00:07:25.502 "data_size": 65536 00:07:25.502 } 00:07:25.502 ] 00:07:25.502 }' 00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.502 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.762 [2024-12-07 02:40:36.763680] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.762 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.762 "name": "Existed_Raid", 00:07:25.762 "aliases": [ 00:07:25.762 "d902f2f5-cb5a-4994-a17e-8d2e6509f831" 00:07:25.762 ], 00:07:25.762 "product_name": "Raid Volume", 00:07:25.762 "block_size": 512, 00:07:25.762 "num_blocks": 131072, 00:07:25.762 "uuid": "d902f2f5-cb5a-4994-a17e-8d2e6509f831", 00:07:25.762 "assigned_rate_limits": { 00:07:25.762 "rw_ios_per_sec": 0, 00:07:25.762 "rw_mbytes_per_sec": 0, 00:07:25.762 "r_mbytes_per_sec": 0, 00:07:25.762 "w_mbytes_per_sec": 0 00:07:25.762 }, 00:07:25.762 "claimed": false, 00:07:25.762 "zoned": false, 00:07:25.762 "supported_io_types": { 00:07:25.762 "read": true, 00:07:25.762 "write": true, 00:07:25.763 "unmap": true, 00:07:25.763 "flush": true, 00:07:25.763 "reset": true, 00:07:25.763 "nvme_admin": false, 00:07:25.763 "nvme_io": false, 00:07:25.763 "nvme_io_md": false, 00:07:25.763 "write_zeroes": true, 00:07:25.763 "zcopy": false, 00:07:25.763 "get_zone_info": false, 00:07:25.763 "zone_management": false, 00:07:25.763 "zone_append": false, 00:07:25.763 "compare": false, 00:07:25.763 "compare_and_write": false, 00:07:25.763 "abort": false, 00:07:25.763 "seek_hole": false, 00:07:25.763 "seek_data": false, 00:07:25.763 "copy": false, 00:07:25.763 "nvme_iov_md": false 00:07:25.763 }, 00:07:25.763 "memory_domains": [ 00:07:25.763 { 00:07:25.763 "dma_device_id": "system", 00:07:25.763 "dma_device_type": 1 00:07:25.763 }, 00:07:25.763 { 00:07:25.763 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:25.763 "dma_device_type": 2 00:07:25.763 }, 00:07:25.763 { 00:07:25.763 "dma_device_id": "system", 00:07:25.763 "dma_device_type": 1 00:07:25.763 }, 00:07:25.763 { 00:07:25.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.763 "dma_device_type": 2 00:07:25.763 } 00:07:25.763 ], 00:07:25.763 "driver_specific": { 00:07:25.763 "raid": { 00:07:25.763 "uuid": "d902f2f5-cb5a-4994-a17e-8d2e6509f831", 00:07:25.763 "strip_size_kb": 64, 00:07:25.763 "state": "online", 00:07:25.763 "raid_level": "raid0", 00:07:25.763 "superblock": false, 00:07:25.763 "num_base_bdevs": 2, 00:07:25.763 "num_base_bdevs_discovered": 2, 00:07:25.763 "num_base_bdevs_operational": 2, 00:07:25.763 "base_bdevs_list": [ 00:07:25.763 { 00:07:25.763 "name": "BaseBdev1", 00:07:25.763 "uuid": "676e868c-ceb8-41a2-8cdf-b4d30cfe4116", 00:07:25.763 "is_configured": true, 00:07:25.763 "data_offset": 0, 00:07:25.763 "data_size": 65536 00:07:25.763 }, 00:07:25.763 { 00:07:25.763 "name": "BaseBdev2", 00:07:25.763 "uuid": "ae2b5e20-e759-41fd-8e3a-abbaaec5b9f0", 00:07:25.763 "is_configured": true, 00:07:25.763 "data_offset": 0, 00:07:25.763 "data_size": 65536 00:07:25.763 } 00:07:25.763 ] 00:07:25.763 } 00:07:25.763 } 00:07:25.763 }' 00:07:25.763 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:26.023 BaseBdev2' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.023 02:40:36 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:26.023 [2024-12-07 02:40:36.982999] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.023 [2024-12-07 02:40:36.983030] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.023 [2024-12-07 02:40:36.983085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.023 02:40:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.023 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.023 "name": "Existed_Raid", 00:07:26.023 "uuid": "d902f2f5-cb5a-4994-a17e-8d2e6509f831", 00:07:26.023 "strip_size_kb": 64, 00:07:26.023 "state": "offline", 00:07:26.023 "raid_level": "raid0", 00:07:26.023 "superblock": false, 00:07:26.023 "num_base_bdevs": 2, 00:07:26.023 "num_base_bdevs_discovered": 1, 00:07:26.023 "num_base_bdevs_operational": 1, 00:07:26.023 "base_bdevs_list": [ 00:07:26.023 { 00:07:26.023 "name": null, 00:07:26.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.023 "is_configured": false, 00:07:26.023 "data_offset": 0, 00:07:26.023 "data_size": 65536 00:07:26.023 }, 00:07:26.023 { 00:07:26.023 "name": "BaseBdev2", 00:07:26.023 "uuid": "ae2b5e20-e759-41fd-8e3a-abbaaec5b9f0", 00:07:26.023 "is_configured": true, 00:07:26.023 "data_offset": 0, 00:07:26.024 "data_size": 65536 00:07:26.024 } 00:07:26.024 ] 00:07:26.024 }' 00:07:26.024 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.024 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.594 [2024-12-07 02:40:37.502690] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:26.594 [2024-12-07 02:40:37.502787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.594 02:40:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72275 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72275 ']' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 72275 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72275 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.594 killing process with pid 72275 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72275' 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72275 00:07:26.594 [2024-12-07 02:40:37.619805] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:07:26.594 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72275 00:07:26.594 [2024-12-07 02:40:37.621374] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.164 02:40:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:27.164 00:07:27.164 real 0m4.089s 00:07:27.164 user 0m6.232s 00:07:27.164 sys 0m0.893s 00:07:27.164 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.164 02:40:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.164 ************************************ 00:07:27.164 END TEST raid_state_function_test 00:07:27.164 ************************************ 00:07:27.165 02:40:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:27.165 02:40:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:27.165 02:40:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.165 02:40:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.165 ************************************ 00:07:27.165 START TEST raid_state_function_test_sb 00:07:27.165 ************************************ 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72517 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:27.165 Process raid pid: 72517 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72517' 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72517 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72517 ']' 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.165 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.165 [2024-12-07 02:40:38.162567] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:27.165 [2024-12-07 02:40:38.162725] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.426 [2024-12-07 02:40:38.326202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.426 [2024-12-07 02:40:38.394821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.426 [2024-12-07 02:40:38.469700] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.426 [2024-12-07 02:40:38.469736] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.997 [2024-12-07 02:40:38.985004] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.997 [2024-12-07 02:40:38.985061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.997 [2024-12-07 02:40:38.985074] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.997 [2024-12-07 02:40:38.985084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.997 
02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.997 02:40:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.997 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.997 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.997 "name": "Existed_Raid", 00:07:27.997 "uuid": "307ea17a-c35d-4a47-b8eb-7fd668198677", 00:07:27.997 "strip_size_kb": 
64, 00:07:27.997 "state": "configuring", 00:07:27.997 "raid_level": "raid0", 00:07:27.997 "superblock": true, 00:07:27.997 "num_base_bdevs": 2, 00:07:27.997 "num_base_bdevs_discovered": 0, 00:07:27.997 "num_base_bdevs_operational": 2, 00:07:27.997 "base_bdevs_list": [ 00:07:27.997 { 00:07:27.997 "name": "BaseBdev1", 00:07:27.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.997 "is_configured": false, 00:07:27.997 "data_offset": 0, 00:07:27.997 "data_size": 0 00:07:27.997 }, 00:07:27.997 { 00:07:27.997 "name": "BaseBdev2", 00:07:27.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.997 "is_configured": false, 00:07:27.997 "data_offset": 0, 00:07:27.997 "data_size": 0 00:07:27.997 } 00:07:27.997 ] 00:07:27.997 }' 00:07:27.997 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.997 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 [2024-12-07 02:40:39.432115] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.567 [2024-12-07 02:40:39.432170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.567 02:40:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 [2024-12-07 02:40:39.444140] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:28.567 [2024-12-07 02:40:39.444185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:28.567 [2024-12-07 02:40:39.444195] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.567 [2024-12-07 02:40:39.444204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 [2024-12-07 02:40:39.471031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.567 BaseBdev1 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.567 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.567 [ 00:07:28.567 { 00:07:28.567 "name": "BaseBdev1", 00:07:28.567 "aliases": [ 00:07:28.567 "74f66d62-a817-44d0-a02e-879b81068d90" 00:07:28.567 ], 00:07:28.567 "product_name": "Malloc disk", 00:07:28.568 "block_size": 512, 00:07:28.568 "num_blocks": 65536, 00:07:28.568 "uuid": "74f66d62-a817-44d0-a02e-879b81068d90", 00:07:28.568 "assigned_rate_limits": { 00:07:28.568 "rw_ios_per_sec": 0, 00:07:28.568 "rw_mbytes_per_sec": 0, 00:07:28.568 "r_mbytes_per_sec": 0, 00:07:28.568 "w_mbytes_per_sec": 0 00:07:28.568 }, 00:07:28.568 "claimed": true, 00:07:28.568 "claim_type": "exclusive_write", 00:07:28.568 "zoned": false, 00:07:28.568 "supported_io_types": { 00:07:28.568 "read": true, 00:07:28.568 "write": true, 00:07:28.568 "unmap": true, 00:07:28.568 "flush": true, 00:07:28.568 "reset": true, 00:07:28.568 "nvme_admin": false, 00:07:28.568 "nvme_io": false, 00:07:28.568 "nvme_io_md": false, 00:07:28.568 "write_zeroes": true, 00:07:28.568 "zcopy": true, 00:07:28.568 "get_zone_info": false, 00:07:28.568 "zone_management": false, 00:07:28.568 "zone_append": false, 00:07:28.568 "compare": false, 00:07:28.568 "compare_and_write": false, 00:07:28.568 
"abort": true, 00:07:28.568 "seek_hole": false, 00:07:28.568 "seek_data": false, 00:07:28.568 "copy": true, 00:07:28.568 "nvme_iov_md": false 00:07:28.568 }, 00:07:28.568 "memory_domains": [ 00:07:28.568 { 00:07:28.568 "dma_device_id": "system", 00:07:28.568 "dma_device_type": 1 00:07:28.568 }, 00:07:28.568 { 00:07:28.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.568 "dma_device_type": 2 00:07:28.568 } 00:07:28.568 ], 00:07:28.568 "driver_specific": {} 00:07:28.568 } 00:07:28.568 ] 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.568 "name": "Existed_Raid", 00:07:28.568 "uuid": "acf141a9-8288-4647-aa2c-bac04b40ea5b", 00:07:28.568 "strip_size_kb": 64, 00:07:28.568 "state": "configuring", 00:07:28.568 "raid_level": "raid0", 00:07:28.568 "superblock": true, 00:07:28.568 "num_base_bdevs": 2, 00:07:28.568 "num_base_bdevs_discovered": 1, 00:07:28.568 "num_base_bdevs_operational": 2, 00:07:28.568 "base_bdevs_list": [ 00:07:28.568 { 00:07:28.568 "name": "BaseBdev1", 00:07:28.568 "uuid": "74f66d62-a817-44d0-a02e-879b81068d90", 00:07:28.568 "is_configured": true, 00:07:28.568 "data_offset": 2048, 00:07:28.568 "data_size": 63488 00:07:28.568 }, 00:07:28.568 { 00:07:28.568 "name": "BaseBdev2", 00:07:28.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.568 "is_configured": false, 00:07:28.568 "data_offset": 0, 00:07:28.568 "data_size": 0 00:07:28.568 } 00:07:28.568 ] 00:07:28.568 }' 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.568 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.828 [2024-12-07 02:40:39.878352] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.828 [2024-12-07 02:40:39.878462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.828 [2024-12-07 02:40:39.890373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.828 [2024-12-07 02:40:39.892521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.828 [2024-12-07 02:40:39.892617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.828 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.088 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.088 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.088 "name": "Existed_Raid", 00:07:29.088 "uuid": "7b924565-4fcc-4b75-94c8-38222eeae3da", 00:07:29.088 "strip_size_kb": 64, 00:07:29.088 "state": "configuring", 00:07:29.088 "raid_level": "raid0", 00:07:29.088 "superblock": true, 00:07:29.088 "num_base_bdevs": 2, 00:07:29.088 "num_base_bdevs_discovered": 1, 00:07:29.088 "num_base_bdevs_operational": 2, 00:07:29.088 "base_bdevs_list": [ 00:07:29.088 { 00:07:29.088 "name": "BaseBdev1", 00:07:29.088 "uuid": "74f66d62-a817-44d0-a02e-879b81068d90", 00:07:29.088 "is_configured": true, 00:07:29.088 "data_offset": 2048, 
00:07:29.088 "data_size": 63488 00:07:29.088 }, 00:07:29.088 { 00:07:29.088 "name": "BaseBdev2", 00:07:29.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.088 "is_configured": false, 00:07:29.088 "data_offset": 0, 00:07:29.088 "data_size": 0 00:07:29.088 } 00:07:29.088 ] 00:07:29.088 }' 00:07:29.088 02:40:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.088 02:40:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.349 [2024-12-07 02:40:40.367990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.349 [2024-12-07 02:40:40.368720] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:29.349 [2024-12-07 02:40:40.368779] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:29.349 BaseBdev2 00:07:29.349 [2024-12-07 02:40:40.369686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.349 [2024-12-07 02:40:40.370120] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:29.349 [2024-12-07 02:40:40.370170] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:29.349 [2024-12-07 02:40:40.370542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.349 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.349 [ 00:07:29.349 { 00:07:29.349 "name": "BaseBdev2", 00:07:29.350 "aliases": [ 00:07:29.350 "e16f33e0-fa2d-4f24-9011-0e83c47c3c16" 00:07:29.350 ], 00:07:29.350 "product_name": "Malloc disk", 00:07:29.350 "block_size": 512, 00:07:29.350 "num_blocks": 65536, 00:07:29.350 "uuid": "e16f33e0-fa2d-4f24-9011-0e83c47c3c16", 00:07:29.350 "assigned_rate_limits": { 00:07:29.350 "rw_ios_per_sec": 0, 00:07:29.350 "rw_mbytes_per_sec": 0, 00:07:29.350 "r_mbytes_per_sec": 0, 00:07:29.350 "w_mbytes_per_sec": 0 00:07:29.350 }, 00:07:29.350 "claimed": true, 00:07:29.350 "claim_type": 
"exclusive_write", 00:07:29.350 "zoned": false, 00:07:29.350 "supported_io_types": { 00:07:29.350 "read": true, 00:07:29.350 "write": true, 00:07:29.350 "unmap": true, 00:07:29.350 "flush": true, 00:07:29.350 "reset": true, 00:07:29.350 "nvme_admin": false, 00:07:29.350 "nvme_io": false, 00:07:29.350 "nvme_io_md": false, 00:07:29.350 "write_zeroes": true, 00:07:29.350 "zcopy": true, 00:07:29.350 "get_zone_info": false, 00:07:29.350 "zone_management": false, 00:07:29.350 "zone_append": false, 00:07:29.350 "compare": false, 00:07:29.350 "compare_and_write": false, 00:07:29.350 "abort": true, 00:07:29.350 "seek_hole": false, 00:07:29.350 "seek_data": false, 00:07:29.350 "copy": true, 00:07:29.350 "nvme_iov_md": false 00:07:29.350 }, 00:07:29.350 "memory_domains": [ 00:07:29.350 { 00:07:29.350 "dma_device_id": "system", 00:07:29.350 "dma_device_type": 1 00:07:29.350 }, 00:07:29.350 { 00:07:29.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.350 "dma_device_type": 2 00:07:29.350 } 00:07:29.350 ], 00:07:29.350 "driver_specific": {} 00:07:29.350 } 00:07:29.350 ] 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.350 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.611 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.611 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.611 "name": "Existed_Raid", 00:07:29.611 "uuid": "7b924565-4fcc-4b75-94c8-38222eeae3da", 00:07:29.611 "strip_size_kb": 64, 00:07:29.611 "state": "online", 00:07:29.611 "raid_level": "raid0", 00:07:29.611 "superblock": true, 00:07:29.611 "num_base_bdevs": 2, 00:07:29.611 "num_base_bdevs_discovered": 2, 00:07:29.611 "num_base_bdevs_operational": 2, 00:07:29.611 "base_bdevs_list": [ 00:07:29.611 { 00:07:29.611 "name": "BaseBdev1", 00:07:29.611 "uuid": "74f66d62-a817-44d0-a02e-879b81068d90", 00:07:29.611 "is_configured": true, 00:07:29.611 "data_offset": 2048, 00:07:29.611 "data_size": 63488 
00:07:29.611 }, 00:07:29.611 { 00:07:29.611 "name": "BaseBdev2", 00:07:29.611 "uuid": "e16f33e0-fa2d-4f24-9011-0e83c47c3c16", 00:07:29.611 "is_configured": true, 00:07:29.611 "data_offset": 2048, 00:07:29.611 "data_size": 63488 00:07:29.611 } 00:07:29.611 ] 00:07:29.611 }' 00:07:29.611 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.611 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.870 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.870 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.871 [2024-12-07 02:40:40.855548] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.871 "name": 
"Existed_Raid", 00:07:29.871 "aliases": [ 00:07:29.871 "7b924565-4fcc-4b75-94c8-38222eeae3da" 00:07:29.871 ], 00:07:29.871 "product_name": "Raid Volume", 00:07:29.871 "block_size": 512, 00:07:29.871 "num_blocks": 126976, 00:07:29.871 "uuid": "7b924565-4fcc-4b75-94c8-38222eeae3da", 00:07:29.871 "assigned_rate_limits": { 00:07:29.871 "rw_ios_per_sec": 0, 00:07:29.871 "rw_mbytes_per_sec": 0, 00:07:29.871 "r_mbytes_per_sec": 0, 00:07:29.871 "w_mbytes_per_sec": 0 00:07:29.871 }, 00:07:29.871 "claimed": false, 00:07:29.871 "zoned": false, 00:07:29.871 "supported_io_types": { 00:07:29.871 "read": true, 00:07:29.871 "write": true, 00:07:29.871 "unmap": true, 00:07:29.871 "flush": true, 00:07:29.871 "reset": true, 00:07:29.871 "nvme_admin": false, 00:07:29.871 "nvme_io": false, 00:07:29.871 "nvme_io_md": false, 00:07:29.871 "write_zeroes": true, 00:07:29.871 "zcopy": false, 00:07:29.871 "get_zone_info": false, 00:07:29.871 "zone_management": false, 00:07:29.871 "zone_append": false, 00:07:29.871 "compare": false, 00:07:29.871 "compare_and_write": false, 00:07:29.871 "abort": false, 00:07:29.871 "seek_hole": false, 00:07:29.871 "seek_data": false, 00:07:29.871 "copy": false, 00:07:29.871 "nvme_iov_md": false 00:07:29.871 }, 00:07:29.871 "memory_domains": [ 00:07:29.871 { 00:07:29.871 "dma_device_id": "system", 00:07:29.871 "dma_device_type": 1 00:07:29.871 }, 00:07:29.871 { 00:07:29.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.871 "dma_device_type": 2 00:07:29.871 }, 00:07:29.871 { 00:07:29.871 "dma_device_id": "system", 00:07:29.871 "dma_device_type": 1 00:07:29.871 }, 00:07:29.871 { 00:07:29.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.871 "dma_device_type": 2 00:07:29.871 } 00:07:29.871 ], 00:07:29.871 "driver_specific": { 00:07:29.871 "raid": { 00:07:29.871 "uuid": "7b924565-4fcc-4b75-94c8-38222eeae3da", 00:07:29.871 "strip_size_kb": 64, 00:07:29.871 "state": "online", 00:07:29.871 "raid_level": "raid0", 00:07:29.871 "superblock": true, 00:07:29.871 
"num_base_bdevs": 2, 00:07:29.871 "num_base_bdevs_discovered": 2, 00:07:29.871 "num_base_bdevs_operational": 2, 00:07:29.871 "base_bdevs_list": [ 00:07:29.871 { 00:07:29.871 "name": "BaseBdev1", 00:07:29.871 "uuid": "74f66d62-a817-44d0-a02e-879b81068d90", 00:07:29.871 "is_configured": true, 00:07:29.871 "data_offset": 2048, 00:07:29.871 "data_size": 63488 00:07:29.871 }, 00:07:29.871 { 00:07:29.871 "name": "BaseBdev2", 00:07:29.871 "uuid": "e16f33e0-fa2d-4f24-9011-0e83c47c3c16", 00:07:29.871 "is_configured": true, 00:07:29.871 "data_offset": 2048, 00:07:29.871 "data_size": 63488 00:07:29.871 } 00:07:29.871 ] 00:07:29.871 } 00:07:29.871 } 00:07:29.871 }' 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:29.871 BaseBdev2' 00:07:29.871 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.131 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:30.132 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.132 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:30.132 02:40:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.132 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.132 02:40:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.132 [2024-12-07 02:40:41.098820] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:30.132 [2024-12-07 02:40:41.098855] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:30.132 [2024-12-07 02:40:41.098934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.132 02:40:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.132 "name": "Existed_Raid", 00:07:30.132 "uuid": "7b924565-4fcc-4b75-94c8-38222eeae3da", 00:07:30.132 "strip_size_kb": 64, 00:07:30.132 "state": "offline", 00:07:30.132 "raid_level": "raid0", 00:07:30.132 "superblock": true, 00:07:30.132 "num_base_bdevs": 2, 00:07:30.132 "num_base_bdevs_discovered": 1, 00:07:30.132 "num_base_bdevs_operational": 1, 00:07:30.132 "base_bdevs_list": [ 00:07:30.132 { 00:07:30.132 "name": null, 00:07:30.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.132 "is_configured": false, 00:07:30.132 "data_offset": 0, 00:07:30.132 "data_size": 63488 00:07:30.132 }, 00:07:30.132 { 00:07:30.132 "name": "BaseBdev2", 00:07:30.132 "uuid": "e16f33e0-fa2d-4f24-9011-0e83c47c3c16", 00:07:30.132 "is_configured": true, 00:07:30.132 "data_offset": 2048, 00:07:30.132 "data_size": 63488 00:07:30.132 } 00:07:30.132 ] 00:07:30.132 }' 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.132 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.703 02:40:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.703 [2024-12-07 02:40:41.598919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.703 [2024-12-07 02:40:41.598993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72517 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72517 ']' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72517 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72517 00:07:30.703 killing process with pid 72517 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72517' 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72517 00:07:30.703 [2024-12-07 02:40:41.708561] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.703 02:40:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72517 00:07:30.703 [2024-12-07 02:40:41.710121] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.273 02:40:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:31.273 00:07:31.273 real 0m4.017s 00:07:31.273 user 0m6.130s 00:07:31.273 sys 0m0.847s 00:07:31.273 02:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.273 02:40:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.273 ************************************ 00:07:31.273 END TEST raid_state_function_test_sb 00:07:31.273 ************************************ 00:07:31.273 02:40:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:31.273 02:40:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:31.273 02:40:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.273 02:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.273 ************************************ 00:07:31.273 START TEST raid_superblock_test 00:07:31.273 ************************************ 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:31.273 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72758 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72758 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72758 ']' 00:07:31.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:31.274 02:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.274 [2024-12-07 02:40:42.258951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:31.274 [2024-12-07 02:40:42.259210] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:07:31.533 [2024-12-07 02:40:42.424933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.534 [2024-12-07 02:40:42.498160] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.534 [2024-12-07 02:40:42.576957] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.534 [2024-12-07 02:40:42.577015] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.102 02:40:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.102 malloc1 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.102 [2024-12-07 02:40:43.120782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.102 [2024-12-07 02:40:43.120951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.102 [2024-12-07 02:40:43.120992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:32.102 [2024-12-07 02:40:43.121044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.102 [2024-12-07 02:40:43.123460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.102 [2024-12-07 02:40:43.123531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.102 pt1 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.102 02:40:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.102 malloc2 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.102 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.102 [2024-12-07 02:40:43.169010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.102 [2024-12-07 02:40:43.169065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.102 [2024-12-07 02:40:43.169082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:32.102 
[2024-12-07 02:40:43.169093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.102 [2024-12-07 02:40:43.171463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.103 [2024-12-07 02:40:43.171498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:32.103 pt2 00:07:32.103 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.103 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:32.103 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:32.103 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:32.103 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.103 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.362 [2024-12-07 02:40:43.181055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:32.362 [2024-12-07 02:40:43.183173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.362 [2024-12-07 02:40:43.183374] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:32.362 [2024-12-07 02:40:43.183397] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.362 [2024-12-07 02:40:43.183687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:32.362 [2024-12-07 02:40:43.183827] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:32.362 [2024-12-07 02:40:43.183838] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:32.362 [2024-12-07 02:40:43.183968] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.362 "name": "raid_bdev1", 00:07:32.362 "uuid": 
"646019ed-0e63-40b1-bd04-aa0d5a72c789", 00:07:32.362 "strip_size_kb": 64, 00:07:32.362 "state": "online", 00:07:32.362 "raid_level": "raid0", 00:07:32.362 "superblock": true, 00:07:32.362 "num_base_bdevs": 2, 00:07:32.362 "num_base_bdevs_discovered": 2, 00:07:32.362 "num_base_bdevs_operational": 2, 00:07:32.362 "base_bdevs_list": [ 00:07:32.362 { 00:07:32.362 "name": "pt1", 00:07:32.362 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.362 "is_configured": true, 00:07:32.362 "data_offset": 2048, 00:07:32.362 "data_size": 63488 00:07:32.362 }, 00:07:32.362 { 00:07:32.362 "name": "pt2", 00:07:32.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.362 "is_configured": true, 00:07:32.362 "data_offset": 2048, 00:07:32.362 "data_size": 63488 00:07:32.362 } 00:07:32.362 ] 00:07:32.362 }' 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.362 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.621 02:40:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.621 [2024-12-07 02:40:43.596578] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.621 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.621 "name": "raid_bdev1", 00:07:32.621 "aliases": [ 00:07:32.621 "646019ed-0e63-40b1-bd04-aa0d5a72c789" 00:07:32.621 ], 00:07:32.621 "product_name": "Raid Volume", 00:07:32.621 "block_size": 512, 00:07:32.621 "num_blocks": 126976, 00:07:32.621 "uuid": "646019ed-0e63-40b1-bd04-aa0d5a72c789", 00:07:32.621 "assigned_rate_limits": { 00:07:32.621 "rw_ios_per_sec": 0, 00:07:32.621 "rw_mbytes_per_sec": 0, 00:07:32.621 "r_mbytes_per_sec": 0, 00:07:32.621 "w_mbytes_per_sec": 0 00:07:32.621 }, 00:07:32.621 "claimed": false, 00:07:32.621 "zoned": false, 00:07:32.621 "supported_io_types": { 00:07:32.621 "read": true, 00:07:32.621 "write": true, 00:07:32.621 "unmap": true, 00:07:32.621 "flush": true, 00:07:32.621 "reset": true, 00:07:32.621 "nvme_admin": false, 00:07:32.622 "nvme_io": false, 00:07:32.622 "nvme_io_md": false, 00:07:32.622 "write_zeroes": true, 00:07:32.622 "zcopy": false, 00:07:32.622 "get_zone_info": false, 00:07:32.622 "zone_management": false, 00:07:32.622 "zone_append": false, 00:07:32.622 "compare": false, 00:07:32.622 "compare_and_write": false, 00:07:32.622 "abort": false, 00:07:32.622 "seek_hole": false, 00:07:32.622 "seek_data": false, 00:07:32.622 "copy": false, 00:07:32.622 "nvme_iov_md": false 00:07:32.622 }, 00:07:32.622 "memory_domains": [ 00:07:32.622 { 00:07:32.622 "dma_device_id": "system", 00:07:32.622 "dma_device_type": 1 00:07:32.622 }, 00:07:32.622 { 00:07:32.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.622 "dma_device_type": 2 00:07:32.622 }, 00:07:32.622 { 00:07:32.622 "dma_device_id": "system", 00:07:32.622 "dma_device_type": 
1 00:07:32.622 }, 00:07:32.622 { 00:07:32.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.622 "dma_device_type": 2 00:07:32.622 } 00:07:32.622 ], 00:07:32.622 "driver_specific": { 00:07:32.622 "raid": { 00:07:32.622 "uuid": "646019ed-0e63-40b1-bd04-aa0d5a72c789", 00:07:32.622 "strip_size_kb": 64, 00:07:32.622 "state": "online", 00:07:32.622 "raid_level": "raid0", 00:07:32.622 "superblock": true, 00:07:32.622 "num_base_bdevs": 2, 00:07:32.622 "num_base_bdevs_discovered": 2, 00:07:32.622 "num_base_bdevs_operational": 2, 00:07:32.622 "base_bdevs_list": [ 00:07:32.622 { 00:07:32.622 "name": "pt1", 00:07:32.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.622 "is_configured": true, 00:07:32.622 "data_offset": 2048, 00:07:32.622 "data_size": 63488 00:07:32.622 }, 00:07:32.622 { 00:07:32.622 "name": "pt2", 00:07:32.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.622 "is_configured": true, 00:07:32.622 "data_offset": 2048, 00:07:32.622 "data_size": 63488 00:07:32.622 } 00:07:32.622 ] 00:07:32.622 } 00:07:32.622 } 00:07:32.622 }' 00:07:32.622 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.622 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.622 pt2' 00:07:32.622 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:32.882 [2024-12-07 02:40:43.848191] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.882 02:40:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=646019ed-0e63-40b1-bd04-aa0d5a72c789 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 646019ed-0e63-40b1-bd04-aa0d5a72c789 ']' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.882 [2024-12-07 02:40:43.895831] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:32.882 [2024-12-07 02:40:43.895880] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.882 [2024-12-07 02:40:43.895992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.882 [2024-12-07 02:40:43.896054] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.882 [2024-12-07 02:40:43.896074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.882 02:40:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.882 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.142 02:40:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.142 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.142 [2024-12-07 02:40:44.031854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:33.142 [2024-12-07 02:40:44.034186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:33.143 [2024-12-07 02:40:44.034321] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:33.143 [2024-12-07 02:40:44.034427] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:33.143 [2024-12-07 02:40:44.034483] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.143 [2024-12-07 02:40:44.034518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:33.143 request: 00:07:33.143 { 00:07:33.143 "name": "raid_bdev1", 00:07:33.143 "raid_level": "raid0", 00:07:33.143 "base_bdevs": [ 00:07:33.143 "malloc1", 00:07:33.143 "malloc2" 00:07:33.143 ], 00:07:33.143 "strip_size_kb": 64, 00:07:33.143 "superblock": false, 00:07:33.143 "method": "bdev_raid_create", 00:07:33.143 "req_id": 1 00:07:33.143 } 00:07:33.143 Got JSON-RPC error response 00:07:33.143 response: 00:07:33.143 { 00:07:33.143 "code": -17, 00:07:33.143 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:33.143 } 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.143 [2024-12-07 02:40:44.099779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:33.143 [2024-12-07 02:40:44.099930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.143 [2024-12-07 02:40:44.099970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:33.143 [2024-12-07 02:40:44.100030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.143 [2024-12-07 02:40:44.102613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.143 [2024-12-07 02:40:44.102682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:33.143 [2024-12-07 02:40:44.102812] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:33.143 [2024-12-07 02:40:44.102896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:33.143 pt1 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.143 "name": "raid_bdev1", 00:07:33.143 "uuid": "646019ed-0e63-40b1-bd04-aa0d5a72c789", 00:07:33.143 "strip_size_kb": 64, 00:07:33.143 "state": "configuring", 00:07:33.143 "raid_level": "raid0", 00:07:33.143 "superblock": true, 00:07:33.143 "num_base_bdevs": 2, 00:07:33.143 "num_base_bdevs_discovered": 1, 00:07:33.143 "num_base_bdevs_operational": 2, 00:07:33.143 "base_bdevs_list": [ 00:07:33.143 { 00:07:33.143 "name": "pt1", 00:07:33.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.143 "is_configured": true, 00:07:33.143 "data_offset": 2048, 00:07:33.143 "data_size": 63488 00:07:33.143 }, 00:07:33.143 { 00:07:33.143 "name": null, 00:07:33.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.143 "is_configured": false, 00:07:33.143 "data_offset": 2048, 00:07:33.143 "data_size": 63488 00:07:33.143 } 00:07:33.143 ] 00:07:33.143 }' 00:07:33.143 02:40:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.143 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 [2024-12-07 02:40:44.582938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:33.712 [2024-12-07 02:40:44.583121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:33.712 [2024-12-07 02:40:44.583154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:33.712 [2024-12-07 02:40:44.583164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:33.712 [2024-12-07 02:40:44.583718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:33.712 [2024-12-07 02:40:44.583739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:33.712 [2024-12-07 02:40:44.583836] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:33.712 [2024-12-07 02:40:44.583864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.712 [2024-12-07 02:40:44.583964] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:33.712 [2024-12-07 02:40:44.583972] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.712 [2024-12-07 02:40:44.584228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:33.712 [2024-12-07 02:40:44.584354] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:33.712 [2024-12-07 02:40:44.584378] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:33.712 [2024-12-07 02:40:44.584494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.712 pt2 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.712 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.712 "name": "raid_bdev1", 00:07:33.712 "uuid": "646019ed-0e63-40b1-bd04-aa0d5a72c789", 00:07:33.712 "strip_size_kb": 64, 00:07:33.712 "state": "online", 00:07:33.712 "raid_level": "raid0", 00:07:33.712 "superblock": true, 00:07:33.712 "num_base_bdevs": 2, 00:07:33.712 "num_base_bdevs_discovered": 2, 00:07:33.712 "num_base_bdevs_operational": 2, 00:07:33.712 "base_bdevs_list": [ 00:07:33.712 { 00:07:33.712 "name": "pt1", 00:07:33.713 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.713 "is_configured": true, 00:07:33.713 "data_offset": 2048, 00:07:33.713 "data_size": 63488 00:07:33.713 }, 00:07:33.713 { 00:07:33.713 "name": "pt2", 00:07:33.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.713 "is_configured": true, 00:07:33.713 "data_offset": 2048, 00:07:33.713 "data_size": 63488 00:07:33.713 } 00:07:33.713 ] 00:07:33.713 }' 00:07:33.713 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.713 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:33.972 
02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.972 02:40:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:33.972 [2024-12-07 02:40:44.994516] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.972 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.972 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:33.972 "name": "raid_bdev1", 00:07:33.972 "aliases": [ 00:07:33.972 "646019ed-0e63-40b1-bd04-aa0d5a72c789" 00:07:33.972 ], 00:07:33.972 "product_name": "Raid Volume", 00:07:33.972 "block_size": 512, 00:07:33.972 "num_blocks": 126976, 00:07:33.972 "uuid": "646019ed-0e63-40b1-bd04-aa0d5a72c789", 00:07:33.972 "assigned_rate_limits": { 00:07:33.972 "rw_ios_per_sec": 0, 00:07:33.972 "rw_mbytes_per_sec": 0, 00:07:33.972 "r_mbytes_per_sec": 0, 00:07:33.972 "w_mbytes_per_sec": 0 00:07:33.972 }, 00:07:33.972 "claimed": false, 00:07:33.972 "zoned": false, 00:07:33.972 "supported_io_types": { 00:07:33.972 "read": true, 00:07:33.972 "write": true, 00:07:33.972 "unmap": true, 00:07:33.972 "flush": true, 00:07:33.972 "reset": true, 00:07:33.972 "nvme_admin": false, 00:07:33.972 "nvme_io": false, 00:07:33.972 "nvme_io_md": false, 00:07:33.972 
"write_zeroes": true, 00:07:33.972 "zcopy": false, 00:07:33.972 "get_zone_info": false, 00:07:33.972 "zone_management": false, 00:07:33.972 "zone_append": false, 00:07:33.972 "compare": false, 00:07:33.972 "compare_and_write": false, 00:07:33.972 "abort": false, 00:07:33.972 "seek_hole": false, 00:07:33.972 "seek_data": false, 00:07:33.972 "copy": false, 00:07:33.972 "nvme_iov_md": false 00:07:33.972 }, 00:07:33.972 "memory_domains": [ 00:07:33.972 { 00:07:33.972 "dma_device_id": "system", 00:07:33.972 "dma_device_type": 1 00:07:33.972 }, 00:07:33.972 { 00:07:33.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.972 "dma_device_type": 2 00:07:33.972 }, 00:07:33.972 { 00:07:33.972 "dma_device_id": "system", 00:07:33.972 "dma_device_type": 1 00:07:33.972 }, 00:07:33.972 { 00:07:33.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.972 "dma_device_type": 2 00:07:33.972 } 00:07:33.972 ], 00:07:33.972 "driver_specific": { 00:07:33.972 "raid": { 00:07:33.972 "uuid": "646019ed-0e63-40b1-bd04-aa0d5a72c789", 00:07:33.972 "strip_size_kb": 64, 00:07:33.972 "state": "online", 00:07:33.972 "raid_level": "raid0", 00:07:33.972 "superblock": true, 00:07:33.972 "num_base_bdevs": 2, 00:07:33.972 "num_base_bdevs_discovered": 2, 00:07:33.973 "num_base_bdevs_operational": 2, 00:07:33.973 "base_bdevs_list": [ 00:07:33.973 { 00:07:33.973 "name": "pt1", 00:07:33.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.973 "is_configured": true, 00:07:33.973 "data_offset": 2048, 00:07:33.973 "data_size": 63488 00:07:33.973 }, 00:07:33.973 { 00:07:33.973 "name": "pt2", 00:07:33.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.973 "is_configured": true, 00:07:33.973 "data_offset": 2048, 00:07:33.973 "data_size": 63488 00:07:33.973 } 00:07:33.973 ] 00:07:33.973 } 00:07:33.973 } 00:07:33.973 }' 00:07:33.973 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:34.232 pt2' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.232 02:40:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:34.232 [2024-12-07 02:40:45.214121] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 646019ed-0e63-40b1-bd04-aa0d5a72c789 '!=' 646019ed-0e63-40b1-bd04-aa0d5a72c789 ']' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72758 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72758 ']' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72758 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72758 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72758' 00:07:34.232 killing process with pid 72758 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72758 00:07:34.232 [2024-12-07 02:40:45.291241] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.232 [2024-12-07 02:40:45.291389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.232 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72758 00:07:34.232 [2024-12-07 02:40:45.291473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.232 [2024-12-07 02:40:45.291485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:34.491 [2024-12-07 02:40:45.334551] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.750 02:40:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.750 00:07:34.750 real 0m3.550s 00:07:34.750 user 0m5.241s 00:07:34.750 sys 0m0.830s 00:07:34.750 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.750 02:40:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.750 ************************************ 00:07:34.750 END TEST raid_superblock_test 00:07:34.750 ************************************ 00:07:34.750 02:40:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:34.750 02:40:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:34.750 02:40:45 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:34.750 02:40:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.750 ************************************ 00:07:34.750 START TEST raid_read_error_test 00:07:34.750 ************************************ 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oSvAyaFJ9u 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72953 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72953 00:07:34.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72953 ']' 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.750 02:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.009 [2024-12-07 02:40:45.893359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:35.009 [2024-12-07 02:40:45.893507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72953 ] 00:07:35.009 [2024-12-07 02:40:46.057176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.267 [2024-12-07 02:40:46.138024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.267 [2024-12-07 02:40:46.217053] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.267 [2024-12-07 02:40:46.217094] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.889 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.889 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:35.889 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.889 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 BaseBdev1_malloc 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 true 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 [2024-12-07 02:40:46.748977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.890 [2024-12-07 02:40:46.749061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.890 [2024-12-07 02:40:46.749082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.890 [2024-12-07 02:40:46.749098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.890 [2024-12-07 02:40:46.751505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.890 [2024-12-07 02:40:46.751648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.890 BaseBdev1 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:35.890 BaseBdev2_malloc 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 true 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 [2024-12-07 02:40:46.810688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.890 [2024-12-07 02:40:46.810755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.890 [2024-12-07 02:40:46.810786] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.890 [2024-12-07 02:40:46.810800] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.890 [2024-12-07 02:40:46.814260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.890 [2024-12-07 02:40:46.814396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:35.890 BaseBdev2 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.890 02:40:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 [2024-12-07 02:40:46.822719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.890 [2024-12-07 02:40:46.825079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.890 [2024-12-07 02:40:46.825326] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:35.890 [2024-12-07 02:40:46.825344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.890 [2024-12-07 02:40:46.825655] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:35.890 [2024-12-07 02:40:46.825811] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:35.890 [2024-12-07 02:40:46.825826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:35.890 [2024-12-07 02:40:46.825964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.890 "name": "raid_bdev1", 00:07:35.890 "uuid": "5280bd4b-27b8-47f0-b76f-74f4670614e7", 00:07:35.890 "strip_size_kb": 64, 00:07:35.890 "state": "online", 00:07:35.890 "raid_level": "raid0", 00:07:35.890 "superblock": true, 00:07:35.890 "num_base_bdevs": 2, 00:07:35.890 "num_base_bdevs_discovered": 2, 00:07:35.890 "num_base_bdevs_operational": 2, 00:07:35.890 "base_bdevs_list": [ 00:07:35.890 { 00:07:35.890 "name": "BaseBdev1", 00:07:35.890 "uuid": "9c0e1313-3202-5abb-a9ac-7a628ce7bccd", 00:07:35.890 "is_configured": true, 00:07:35.890 "data_offset": 2048, 00:07:35.890 "data_size": 63488 00:07:35.890 }, 00:07:35.890 { 00:07:35.890 "name": "BaseBdev2", 00:07:35.890 "uuid": "e121cea3-6e90-52e5-b7d4-4e0a21fe55a7", 00:07:35.890 "is_configured": true, 00:07:35.890 "data_offset": 2048, 00:07:35.890 "data_size": 63488 00:07:35.890 } 00:07:35.890 ] 00:07:35.890 }' 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.890 02:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.458 02:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:36.458 02:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.458 [2024-12-07 02:40:47.354226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.395 "name": "raid_bdev1", 00:07:37.395 "uuid": "5280bd4b-27b8-47f0-b76f-74f4670614e7", 00:07:37.395 "strip_size_kb": 64, 00:07:37.395 "state": "online", 00:07:37.395 "raid_level": "raid0", 00:07:37.395 "superblock": true, 00:07:37.395 "num_base_bdevs": 2, 00:07:37.395 "num_base_bdevs_discovered": 2, 00:07:37.395 "num_base_bdevs_operational": 2, 00:07:37.395 "base_bdevs_list": [ 00:07:37.395 { 00:07:37.395 "name": "BaseBdev1", 00:07:37.395 "uuid": "9c0e1313-3202-5abb-a9ac-7a628ce7bccd", 00:07:37.395 "is_configured": true, 00:07:37.395 "data_offset": 2048, 00:07:37.395 "data_size": 63488 00:07:37.395 }, 00:07:37.395 { 00:07:37.395 "name": "BaseBdev2", 00:07:37.395 "uuid": "e121cea3-6e90-52e5-b7d4-4e0a21fe55a7", 00:07:37.395 "is_configured": true, 00:07:37.395 "data_offset": 2048, 00:07:37.395 "data_size": 63488 00:07:37.395 } 00:07:37.395 ] 00:07:37.395 }' 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.395 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.654 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.654 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.654 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.913 [2024-12-07 02:40:48.730132] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.913 [2024-12-07 02:40:48.730174] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.913 { 00:07:37.913 "results": [ 00:07:37.913 { 00:07:37.913 "job": "raid_bdev1", 00:07:37.913 "core_mask": "0x1", 00:07:37.913 "workload": "randrw", 00:07:37.913 "percentage": 50, 00:07:37.913 "status": "finished", 00:07:37.913 "queue_depth": 1, 00:07:37.913 "io_size": 131072, 00:07:37.913 "runtime": 1.376504, 00:07:37.913 "iops": 16049.354015680303, 00:07:37.913 "mibps": 2006.1692519600379, 00:07:37.913 "io_failed": 1, 00:07:37.913 "io_timeout": 0, 00:07:37.913 "avg_latency_us": 87.18411463094577, 00:07:37.913 "min_latency_us": 24.146724890829695, 00:07:37.913 "max_latency_us": 2575.650655021834 00:07:37.913 } 00:07:37.913 ], 00:07:37.913 "core_count": 1 00:07:37.913 } 00:07:37.913 [2024-12-07 02:40:48.732760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.913 [2024-12-07 02:40:48.732813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.913 [2024-12-07 02:40:48.732852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.913 [2024-12-07 02:40:48.732862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72953 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72953 ']' 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72953 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72953 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72953' 00:07:37.913 killing process with pid 72953 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72953 00:07:37.913 [2024-12-07 02:40:48.782364] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.913 02:40:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72953 00:07:37.913 [2024-12-07 02:40:48.809716] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oSvAyaFJ9u 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:38.173 00:07:38.173 real 0m3.398s 00:07:38.173 user 0m4.166s 00:07:38.173 sys 0m0.613s 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.173 02:40:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.173 ************************************ 00:07:38.173 END TEST raid_read_error_test 00:07:38.173 ************************************ 00:07:38.173 02:40:49 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:38.173 02:40:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:38.173 02:40:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.174 02:40:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.434 ************************************ 00:07:38.434 START TEST raid_write_error_test 00:07:38.434 ************************************ 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.434 02:40:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.u0TcMFiWNg 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73088 00:07:38.434 02:40:49 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73088 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73088 ']' 00:07:38.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.434 02:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.435 02:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.435 02:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.435 02:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.435 [2024-12-07 02:40:49.359118] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:38.435 [2024-12-07 02:40:49.359238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73088 ] 00:07:38.435 [2024-12-07 02:40:49.503641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.694 [2024-12-07 02:40:49.576453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.695 [2024-12-07 02:40:49.652946] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:38.695 [2024-12-07 02:40:49.652991] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 BaseBdev1_malloc 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 true 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 [2024-12-07 02:40:50.227674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.264 [2024-12-07 02:40:50.227799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.264 [2024-12-07 02:40:50.227824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.264 [2024-12-07 02:40:50.227834] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.264 [2024-12-07 02:40:50.230253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.264 [2024-12-07 02:40:50.230290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.264 BaseBdev1 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 BaseBdev2_malloc 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.264 02:40:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 true 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 [2024-12-07 02:40:50.288574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.264 [2024-12-07 02:40:50.288658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.264 [2024-12-07 02:40:50.288686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.264 [2024-12-07 02:40:50.288701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.264 [2024-12-07 02:40:50.292142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.264 [2024-12-07 02:40:50.292274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.264 BaseBdev2 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 [2024-12-07 02:40:50.300560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:39.264 [2024-12-07 02:40:50.302847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.264 [2024-12-07 02:40:50.303039] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:39.264 [2024-12-07 02:40:50.303054] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.264 [2024-12-07 02:40:50.303344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:39.264 [2024-12-07 02:40:50.303492] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:39.264 [2024-12-07 02:40:50.303506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:39.264 [2024-12-07 02:40:50.303657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.264 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.524 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.524 "name": "raid_bdev1", 00:07:39.524 "uuid": "b114a61d-a5f8-4b22-82d7-cbeab082e352", 00:07:39.524 "strip_size_kb": 64, 00:07:39.524 "state": "online", 00:07:39.524 "raid_level": "raid0", 00:07:39.524 "superblock": true, 00:07:39.524 "num_base_bdevs": 2, 00:07:39.524 "num_base_bdevs_discovered": 2, 00:07:39.524 "num_base_bdevs_operational": 2, 00:07:39.524 "base_bdevs_list": [ 00:07:39.524 { 00:07:39.524 "name": "BaseBdev1", 00:07:39.524 "uuid": "3751ed8b-b342-5d0c-96e9-55ec9091e08b", 00:07:39.524 "is_configured": true, 00:07:39.524 "data_offset": 2048, 00:07:39.524 "data_size": 63488 00:07:39.524 }, 00:07:39.524 { 00:07:39.524 "name": "BaseBdev2", 00:07:39.524 "uuid": "4f48f313-3023-5b19-a5fd-b173afaeb877", 00:07:39.524 "is_configured": true, 00:07:39.524 "data_offset": 2048, 00:07:39.524 "data_size": 63488 00:07:39.524 } 00:07:39.524 ] 00:07:39.524 }' 00:07:39.524 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.524 02:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.784 02:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:39.784 02:40:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.045 [2024-12-07 02:40:50.876058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.987 02:40:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.987 "name": "raid_bdev1", 00:07:40.987 "uuid": "b114a61d-a5f8-4b22-82d7-cbeab082e352", 00:07:40.987 "strip_size_kb": 64, 00:07:40.987 "state": "online", 00:07:40.987 "raid_level": "raid0", 00:07:40.987 "superblock": true, 00:07:40.987 "num_base_bdevs": 2, 00:07:40.987 "num_base_bdevs_discovered": 2, 00:07:40.987 "num_base_bdevs_operational": 2, 00:07:40.987 "base_bdevs_list": [ 00:07:40.987 { 00:07:40.987 "name": "BaseBdev1", 00:07:40.987 "uuid": "3751ed8b-b342-5d0c-96e9-55ec9091e08b", 00:07:40.987 "is_configured": true, 00:07:40.987 "data_offset": 2048, 00:07:40.987 "data_size": 63488 00:07:40.987 }, 00:07:40.987 { 00:07:40.987 "name": "BaseBdev2", 00:07:40.987 "uuid": "4f48f313-3023-5b19-a5fd-b173afaeb877", 00:07:40.987 "is_configured": true, 00:07:40.987 "data_offset": 2048, 00:07:40.987 "data_size": 63488 00:07:40.987 } 00:07:40.987 ] 00:07:40.987 }' 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.987 02:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.247 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.247 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.247 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.247 [2024-12-07 02:40:52.252423] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.247 [2024-12-07 02:40:52.252563] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.247 [2024-12-07 02:40:52.255089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.247 [2024-12-07 02:40:52.255141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.247 [2024-12-07 02:40:52.255181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.247 [2024-12-07 02:40:52.255190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:41.247 { 00:07:41.247 "results": [ 00:07:41.247 { 00:07:41.247 "job": "raid_bdev1", 00:07:41.247 "core_mask": "0x1", 00:07:41.247 "workload": "randrw", 00:07:41.247 "percentage": 50, 00:07:41.247 "status": "finished", 00:07:41.247 "queue_depth": 1, 00:07:41.247 "io_size": 131072, 00:07:41.247 "runtime": 1.377137, 00:07:41.247 "iops": 15778.386609320642, 00:07:41.247 "mibps": 1972.2983261650802, 00:07:41.247 "io_failed": 1, 00:07:41.247 "io_timeout": 0, 00:07:41.248 "avg_latency_us": 88.72589738694617, 00:07:41.248 "min_latency_us": 24.370305676855896, 00:07:41.248 "max_latency_us": 1359.3711790393013 00:07:41.248 } 00:07:41.248 ], 00:07:41.248 "core_count": 1 00:07:41.248 } 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73088 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 73088 ']' 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73088 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73088 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73088' 00:07:41.248 killing process with pid 73088 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73088 00:07:41.248 [2024-12-07 02:40:52.301478] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.248 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73088 00:07:41.508 [2024-12-07 02:40:52.331111] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.u0TcMFiWNg 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:41.769 ************************************ 00:07:41.769 END TEST raid_write_error_test 00:07:41.769 ************************************ 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:41.769 00:07:41.769 real 0m3.458s 00:07:41.769 user 0m4.269s 00:07:41.769 sys 0m0.616s 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.769 02:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.769 02:40:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:41.769 02:40:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:41.769 02:40:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:41.769 02:40:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.769 02:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:41.769 ************************************ 00:07:41.769 START TEST raid_state_function_test 00:07:41.769 ************************************ 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73220 00:07:41.769 02:40:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73220' 00:07:41.769 Process raid pid: 73220 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73220 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73220 ']' 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.769 02:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.029 [2024-12-07 02:40:52.884386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:42.029 [2024-12-07 02:40:52.884598] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.029 [2024-12-07 02:40:53.042656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.289 [2024-12-07 02:40:53.111833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.289 [2024-12-07 02:40:53.187070] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.289 [2024-12-07 02:40:53.187204] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.864 [2024-12-07 02:40:53.709775] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:42.864 [2024-12-07 02:40:53.709831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:42.864 [2024-12-07 02:40:53.709844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.864 [2024-12-07 02:40:53.709854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.864 02:40:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.864 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.864 "name": "Existed_Raid", 00:07:42.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.864 "strip_size_kb": 64, 00:07:42.864 "state": "configuring", 00:07:42.864 
"raid_level": "concat", 00:07:42.864 "superblock": false, 00:07:42.864 "num_base_bdevs": 2, 00:07:42.864 "num_base_bdevs_discovered": 0, 00:07:42.864 "num_base_bdevs_operational": 2, 00:07:42.864 "base_bdevs_list": [ 00:07:42.864 { 00:07:42.864 "name": "BaseBdev1", 00:07:42.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.865 "is_configured": false, 00:07:42.865 "data_offset": 0, 00:07:42.865 "data_size": 0 00:07:42.865 }, 00:07:42.865 { 00:07:42.865 "name": "BaseBdev2", 00:07:42.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.865 "is_configured": false, 00:07:42.865 "data_offset": 0, 00:07:42.865 "data_size": 0 00:07:42.865 } 00:07:42.865 ] 00:07:42.865 }' 00:07:42.865 02:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.865 02:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 [2024-12-07 02:40:54.132944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.125 [2024-12-07 02:40:54.133082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:43.125 [2024-12-07 02:40:54.144969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:43.125 [2024-12-07 02:40:54.145009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:43.125 [2024-12-07 02:40:54.145019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.125 [2024-12-07 02:40:54.145028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 [2024-12-07 02:40:54.171997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.125 BaseBdev1 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.125 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 [ 00:07:43.125 { 00:07:43.125 "name": "BaseBdev1", 00:07:43.125 "aliases": [ 00:07:43.125 "2afc8df2-a3d8-49d1-8f2b-218779d3c48f" 00:07:43.125 ], 00:07:43.125 "product_name": "Malloc disk", 00:07:43.125 "block_size": 512, 00:07:43.125 "num_blocks": 65536, 00:07:43.125 "uuid": "2afc8df2-a3d8-49d1-8f2b-218779d3c48f", 00:07:43.125 "assigned_rate_limits": { 00:07:43.125 "rw_ios_per_sec": 0, 00:07:43.125 "rw_mbytes_per_sec": 0, 00:07:43.125 "r_mbytes_per_sec": 0, 00:07:43.125 "w_mbytes_per_sec": 0 00:07:43.125 }, 00:07:43.125 "claimed": true, 00:07:43.125 "claim_type": "exclusive_write", 00:07:43.125 "zoned": false, 00:07:43.125 "supported_io_types": { 00:07:43.125 "read": true, 00:07:43.125 "write": true, 00:07:43.125 "unmap": true, 00:07:43.125 "flush": true, 00:07:43.125 "reset": true, 00:07:43.386 "nvme_admin": false, 00:07:43.386 "nvme_io": false, 00:07:43.386 "nvme_io_md": false, 00:07:43.386 "write_zeroes": true, 00:07:43.386 "zcopy": true, 00:07:43.386 "get_zone_info": false, 00:07:43.386 "zone_management": false, 00:07:43.386 "zone_append": false, 00:07:43.386 "compare": false, 00:07:43.386 "compare_and_write": false, 00:07:43.386 "abort": true, 00:07:43.386 "seek_hole": false, 00:07:43.386 "seek_data": false, 00:07:43.386 "copy": true, 00:07:43.386 "nvme_iov_md": 
false 00:07:43.386 }, 00:07:43.386 "memory_domains": [ 00:07:43.386 { 00:07:43.386 "dma_device_id": "system", 00:07:43.386 "dma_device_type": 1 00:07:43.386 }, 00:07:43.386 { 00:07:43.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.386 "dma_device_type": 2 00:07:43.386 } 00:07:43.386 ], 00:07:43.386 "driver_specific": {} 00:07:43.386 } 00:07:43.386 ] 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.386 
02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.386 "name": "Existed_Raid", 00:07:43.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.386 "strip_size_kb": 64, 00:07:43.386 "state": "configuring", 00:07:43.386 "raid_level": "concat", 00:07:43.386 "superblock": false, 00:07:43.386 "num_base_bdevs": 2, 00:07:43.386 "num_base_bdevs_discovered": 1, 00:07:43.386 "num_base_bdevs_operational": 2, 00:07:43.386 "base_bdevs_list": [ 00:07:43.386 { 00:07:43.386 "name": "BaseBdev1", 00:07:43.386 "uuid": "2afc8df2-a3d8-49d1-8f2b-218779d3c48f", 00:07:43.386 "is_configured": true, 00:07:43.386 "data_offset": 0, 00:07:43.386 "data_size": 65536 00:07:43.386 }, 00:07:43.386 { 00:07:43.386 "name": "BaseBdev2", 00:07:43.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.386 "is_configured": false, 00:07:43.386 "data_offset": 0, 00:07:43.386 "data_size": 0 00:07:43.386 } 00:07:43.386 ] 00:07:43.386 }' 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.386 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.646 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.646 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.646 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.647 [2024-12-07 02:40:54.659613] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.647 [2024-12-07 02:40:54.659722] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.647 [2024-12-07 02:40:54.667653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:43.647 [2024-12-07 02:40:54.669761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:43.647 [2024-12-07 02:40:54.669800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.647 "name": "Existed_Raid", 00:07:43.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.647 "strip_size_kb": 64, 00:07:43.647 "state": "configuring", 00:07:43.647 "raid_level": "concat", 00:07:43.647 "superblock": false, 00:07:43.647 "num_base_bdevs": 2, 00:07:43.647 "num_base_bdevs_discovered": 1, 00:07:43.647 "num_base_bdevs_operational": 2, 00:07:43.647 "base_bdevs_list": [ 00:07:43.647 { 00:07:43.647 "name": "BaseBdev1", 00:07:43.647 "uuid": "2afc8df2-a3d8-49d1-8f2b-218779d3c48f", 00:07:43.647 "is_configured": true, 00:07:43.647 "data_offset": 0, 00:07:43.647 "data_size": 65536 00:07:43.647 }, 00:07:43.647 { 00:07:43.647 "name": "BaseBdev2", 00:07:43.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.647 "is_configured": false, 00:07:43.647 "data_offset": 0, 00:07:43.647 "data_size": 0 00:07:43.647 } 
00:07:43.647 ] 00:07:43.647 }' 00:07:43.647 02:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.907 02:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.167 [2024-12-07 02:40:55.058718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.167 [2024-12-07 02:40:55.058952] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:44.167 [2024-12-07 02:40:55.059034] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:44.167 [2024-12-07 02:40:55.059931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:44.167 [2024-12-07 02:40:55.060454] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:44.167 [2024-12-07 02:40:55.060631] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:44.167 [2024-12-07 02:40:55.061301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.167 BaseBdev2 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:44.167 02:40:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.167 [ 00:07:44.167 { 00:07:44.167 "name": "BaseBdev2", 00:07:44.167 "aliases": [ 00:07:44.167 "5a666c32-e553-4954-a008-eef34510340d" 00:07:44.167 ], 00:07:44.167 "product_name": "Malloc disk", 00:07:44.167 "block_size": 512, 00:07:44.167 "num_blocks": 65536, 00:07:44.167 "uuid": "5a666c32-e553-4954-a008-eef34510340d", 00:07:44.167 "assigned_rate_limits": { 00:07:44.167 "rw_ios_per_sec": 0, 00:07:44.167 "rw_mbytes_per_sec": 0, 00:07:44.167 "r_mbytes_per_sec": 0, 00:07:44.167 "w_mbytes_per_sec": 0 00:07:44.167 }, 00:07:44.167 "claimed": true, 00:07:44.167 "claim_type": "exclusive_write", 00:07:44.167 "zoned": false, 00:07:44.167 "supported_io_types": { 00:07:44.167 "read": true, 00:07:44.167 "write": true, 00:07:44.167 "unmap": true, 00:07:44.167 "flush": true, 00:07:44.167 "reset": true, 00:07:44.167 "nvme_admin": false, 00:07:44.167 "nvme_io": false, 00:07:44.167 "nvme_io_md": 
false, 00:07:44.167 "write_zeroes": true, 00:07:44.167 "zcopy": true, 00:07:44.167 "get_zone_info": false, 00:07:44.167 "zone_management": false, 00:07:44.167 "zone_append": false, 00:07:44.167 "compare": false, 00:07:44.167 "compare_and_write": false, 00:07:44.167 "abort": true, 00:07:44.167 "seek_hole": false, 00:07:44.167 "seek_data": false, 00:07:44.167 "copy": true, 00:07:44.167 "nvme_iov_md": false 00:07:44.167 }, 00:07:44.167 "memory_domains": [ 00:07:44.167 { 00:07:44.167 "dma_device_id": "system", 00:07:44.167 "dma_device_type": 1 00:07:44.167 }, 00:07:44.167 { 00:07:44.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.167 "dma_device_type": 2 00:07:44.167 } 00:07:44.167 ], 00:07:44.167 "driver_specific": {} 00:07:44.167 } 00:07:44.167 ] 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.167 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.167 "name": "Existed_Raid", 00:07:44.167 "uuid": "93ddf8ae-d695-4712-9f0f-41c481f435ce", 00:07:44.167 "strip_size_kb": 64, 00:07:44.167 "state": "online", 00:07:44.167 "raid_level": "concat", 00:07:44.167 "superblock": false, 00:07:44.167 "num_base_bdevs": 2, 00:07:44.167 "num_base_bdevs_discovered": 2, 00:07:44.167 "num_base_bdevs_operational": 2, 00:07:44.167 "base_bdevs_list": [ 00:07:44.167 { 00:07:44.167 "name": "BaseBdev1", 00:07:44.167 "uuid": "2afc8df2-a3d8-49d1-8f2b-218779d3c48f", 00:07:44.167 "is_configured": true, 00:07:44.167 "data_offset": 0, 00:07:44.167 "data_size": 65536 00:07:44.168 }, 00:07:44.168 { 00:07:44.168 "name": "BaseBdev2", 00:07:44.168 "uuid": "5a666c32-e553-4954-a008-eef34510340d", 00:07:44.168 "is_configured": true, 00:07:44.168 "data_offset": 0, 00:07:44.168 "data_size": 65536 00:07:44.168 } 00:07:44.168 ] 00:07:44.168 }' 00:07:44.168 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:44.168 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 [2024-12-07 02:40:55.542070] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:44.739 "name": "Existed_Raid", 00:07:44.739 "aliases": [ 00:07:44.739 "93ddf8ae-d695-4712-9f0f-41c481f435ce" 00:07:44.739 ], 00:07:44.739 "product_name": "Raid Volume", 00:07:44.739 "block_size": 512, 00:07:44.739 "num_blocks": 131072, 00:07:44.739 "uuid": "93ddf8ae-d695-4712-9f0f-41c481f435ce", 00:07:44.739 "assigned_rate_limits": { 00:07:44.739 "rw_ios_per_sec": 0, 00:07:44.739 "rw_mbytes_per_sec": 0, 00:07:44.739 "r_mbytes_per_sec": 
0, 00:07:44.739 "w_mbytes_per_sec": 0 00:07:44.739 }, 00:07:44.739 "claimed": false, 00:07:44.739 "zoned": false, 00:07:44.739 "supported_io_types": { 00:07:44.739 "read": true, 00:07:44.739 "write": true, 00:07:44.739 "unmap": true, 00:07:44.739 "flush": true, 00:07:44.739 "reset": true, 00:07:44.739 "nvme_admin": false, 00:07:44.739 "nvme_io": false, 00:07:44.739 "nvme_io_md": false, 00:07:44.739 "write_zeroes": true, 00:07:44.739 "zcopy": false, 00:07:44.739 "get_zone_info": false, 00:07:44.739 "zone_management": false, 00:07:44.739 "zone_append": false, 00:07:44.739 "compare": false, 00:07:44.739 "compare_and_write": false, 00:07:44.739 "abort": false, 00:07:44.739 "seek_hole": false, 00:07:44.739 "seek_data": false, 00:07:44.739 "copy": false, 00:07:44.739 "nvme_iov_md": false 00:07:44.739 }, 00:07:44.739 "memory_domains": [ 00:07:44.739 { 00:07:44.739 "dma_device_id": "system", 00:07:44.739 "dma_device_type": 1 00:07:44.739 }, 00:07:44.739 { 00:07:44.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.739 "dma_device_type": 2 00:07:44.739 }, 00:07:44.739 { 00:07:44.739 "dma_device_id": "system", 00:07:44.739 "dma_device_type": 1 00:07:44.739 }, 00:07:44.739 { 00:07:44.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.739 "dma_device_type": 2 00:07:44.739 } 00:07:44.739 ], 00:07:44.739 "driver_specific": { 00:07:44.739 "raid": { 00:07:44.739 "uuid": "93ddf8ae-d695-4712-9f0f-41c481f435ce", 00:07:44.739 "strip_size_kb": 64, 00:07:44.739 "state": "online", 00:07:44.739 "raid_level": "concat", 00:07:44.739 "superblock": false, 00:07:44.739 "num_base_bdevs": 2, 00:07:44.739 "num_base_bdevs_discovered": 2, 00:07:44.739 "num_base_bdevs_operational": 2, 00:07:44.739 "base_bdevs_list": [ 00:07:44.739 { 00:07:44.739 "name": "BaseBdev1", 00:07:44.739 "uuid": "2afc8df2-a3d8-49d1-8f2b-218779d3c48f", 00:07:44.739 "is_configured": true, 00:07:44.739 "data_offset": 0, 00:07:44.739 "data_size": 65536 00:07:44.739 }, 00:07:44.739 { 00:07:44.739 "name": "BaseBdev2", 
00:07:44.739 "uuid": "5a666c32-e553-4954-a008-eef34510340d", 00:07:44.739 "is_configured": true, 00:07:44.739 "data_offset": 0, 00:07:44.739 "data_size": 65536 00:07:44.739 } 00:07:44.739 ] 00:07:44.739 } 00:07:44.739 } 00:07:44.739 }' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:44.739 BaseBdev2' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 [2024-12-07 02:40:55.757510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:44.739 [2024-12-07 02:40:55.757554] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.739 [2024-12-07 02:40:55.757628] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.739 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.740 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.740 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.740 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.740 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.740 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.740 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.000 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.000 "name": "Existed_Raid", 00:07:45.000 "uuid": "93ddf8ae-d695-4712-9f0f-41c481f435ce", 00:07:45.000 "strip_size_kb": 64, 00:07:45.000 
"state": "offline", 00:07:45.000 "raid_level": "concat", 00:07:45.000 "superblock": false, 00:07:45.000 "num_base_bdevs": 2, 00:07:45.000 "num_base_bdevs_discovered": 1, 00:07:45.000 "num_base_bdevs_operational": 1, 00:07:45.000 "base_bdevs_list": [ 00:07:45.000 { 00:07:45.000 "name": null, 00:07:45.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.000 "is_configured": false, 00:07:45.000 "data_offset": 0, 00:07:45.000 "data_size": 65536 00:07:45.000 }, 00:07:45.000 { 00:07:45.000 "name": "BaseBdev2", 00:07:45.000 "uuid": "5a666c32-e553-4954-a008-eef34510340d", 00:07:45.000 "is_configured": true, 00:07:45.000 "data_offset": 0, 00:07:45.000 "data_size": 65536 00:07:45.000 } 00:07:45.000 ] 00:07:45.000 }' 00:07:45.000 02:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.000 02:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.259 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.259 [2024-12-07 02:40:56.265583] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:45.259 [2024-12-07 02:40:56.265718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.260 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73220 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73220 ']' 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 73220 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73220 00:07:45.519 killing process with pid 73220 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73220' 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73220 00:07:45.519 [2024-12-07 02:40:56.385269] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.519 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73220 00:07:45.519 [2024-12-07 02:40:56.386801] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.779 00:07:45.779 real 0m3.965s 00:07:45.779 user 0m6.002s 00:07:45.779 sys 0m0.880s 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.779 ************************************ 00:07:45.779 END TEST raid_state_function_test 00:07:45.779 ************************************ 00:07:45.779 02:40:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:45.779 02:40:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:45.779 02:40:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.779 02:40:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.779 ************************************ 00:07:45.779 START TEST raid_state_function_test_sb 00:07:45.779 ************************************ 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73462 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73462' 00:07:45.779 Process raid pid: 73462 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73462 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73462 ']' 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.779 02:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.040 [2024-12-07 02:40:56.917444] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:46.040 [2024-12-07 02:40:56.917687] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.040 [2024-12-07 02:40:57.077247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.315 [2024-12-07 02:40:57.151039] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.315 [2024-12-07 02:40:57.228218] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.315 [2024-12-07 02:40:57.228365] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 [2024-12-07 02:40:57.760371] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:46.885 [2024-12-07 02:40:57.760509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:46.885 [2024-12-07 02:40:57.760538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.885 [2024-12-07 02:40:57.760550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.885 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.885 "name": "Existed_Raid", 00:07:46.886 "uuid": "ba48229d-7d6c-4bf0-a720-3df5e7a6ba82", 00:07:46.886 "strip_size_kb": 64, 00:07:46.886 "state": "configuring", 00:07:46.886 "raid_level": "concat", 00:07:46.886 "superblock": true, 00:07:46.886 "num_base_bdevs": 2, 00:07:46.886 "num_base_bdevs_discovered": 0, 00:07:46.886 "num_base_bdevs_operational": 2, 00:07:46.886 "base_bdevs_list": [ 00:07:46.886 { 00:07:46.886 "name": "BaseBdev1", 00:07:46.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.886 "is_configured": false, 00:07:46.886 "data_offset": 0, 00:07:46.886 "data_size": 0 00:07:46.886 }, 00:07:46.886 { 00:07:46.886 "name": "BaseBdev2", 00:07:46.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.886 "is_configured": false, 00:07:46.886 "data_offset": 0, 00:07:46.886 "data_size": 0 00:07:46.886 } 00:07:46.886 ] 00:07:46.886 }' 00:07:46.886 02:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.886 02:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.146 [2024-12-07 02:40:58.159638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:47.146 [2024-12-07 02:40:58.159756] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.146 [2024-12-07 02:40:58.171687] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:47.146 [2024-12-07 02:40:58.171765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:47.146 [2024-12-07 02:40:58.171790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.146 [2024-12-07 02:40:58.171812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.146 [2024-12-07 02:40:58.198513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.146 BaseBdev1 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.146 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.407 [ 00:07:47.407 { 00:07:47.407 "name": "BaseBdev1", 00:07:47.407 "aliases": [ 00:07:47.407 "ad570956-6875-4a22-83cc-80fbf130d8b4" 00:07:47.407 ], 00:07:47.407 "product_name": "Malloc disk", 00:07:47.407 "block_size": 512, 00:07:47.407 "num_blocks": 65536, 00:07:47.407 "uuid": "ad570956-6875-4a22-83cc-80fbf130d8b4", 00:07:47.407 "assigned_rate_limits": { 00:07:47.407 "rw_ios_per_sec": 0, 00:07:47.407 "rw_mbytes_per_sec": 0, 00:07:47.407 "r_mbytes_per_sec": 0, 00:07:47.407 "w_mbytes_per_sec": 0 00:07:47.407 }, 00:07:47.407 "claimed": true, 
00:07:47.407 "claim_type": "exclusive_write", 00:07:47.407 "zoned": false, 00:07:47.407 "supported_io_types": { 00:07:47.407 "read": true, 00:07:47.407 "write": true, 00:07:47.407 "unmap": true, 00:07:47.407 "flush": true, 00:07:47.407 "reset": true, 00:07:47.407 "nvme_admin": false, 00:07:47.407 "nvme_io": false, 00:07:47.407 "nvme_io_md": false, 00:07:47.407 "write_zeroes": true, 00:07:47.407 "zcopy": true, 00:07:47.407 "get_zone_info": false, 00:07:47.407 "zone_management": false, 00:07:47.407 "zone_append": false, 00:07:47.407 "compare": false, 00:07:47.407 "compare_and_write": false, 00:07:47.407 "abort": true, 00:07:47.407 "seek_hole": false, 00:07:47.407 "seek_data": false, 00:07:47.407 "copy": true, 00:07:47.407 "nvme_iov_md": false 00:07:47.407 }, 00:07:47.407 "memory_domains": [ 00:07:47.407 { 00:07:47.407 "dma_device_id": "system", 00:07:47.407 "dma_device_type": 1 00:07:47.407 }, 00:07:47.407 { 00:07:47.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.407 "dma_device_type": 2 00:07:47.407 } 00:07:47.407 ], 00:07:47.407 "driver_specific": {} 00:07:47.407 } 00:07:47.407 ] 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.407 02:40:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.407 "name": "Existed_Raid", 00:07:47.407 "uuid": "79173741-13e5-46fa-bbec-d73d3805f351", 00:07:47.407 "strip_size_kb": 64, 00:07:47.407 "state": "configuring", 00:07:47.407 "raid_level": "concat", 00:07:47.407 "superblock": true, 00:07:47.407 "num_base_bdevs": 2, 00:07:47.407 "num_base_bdevs_discovered": 1, 00:07:47.407 "num_base_bdevs_operational": 2, 00:07:47.407 "base_bdevs_list": [ 00:07:47.407 { 00:07:47.407 "name": "BaseBdev1", 00:07:47.407 "uuid": "ad570956-6875-4a22-83cc-80fbf130d8b4", 00:07:47.407 "is_configured": true, 00:07:47.407 "data_offset": 2048, 00:07:47.407 "data_size": 63488 00:07:47.407 }, 00:07:47.407 { 00:07:47.407 "name": "BaseBdev2", 00:07:47.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.407 
"is_configured": false, 00:07:47.407 "data_offset": 0, 00:07:47.407 "data_size": 0 00:07:47.407 } 00:07:47.407 ] 00:07:47.407 }' 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.407 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.667 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:47.667 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.667 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.667 [2024-12-07 02:40:58.701674] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:47.668 [2024-12-07 02:40:58.701722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.668 [2024-12-07 02:40:58.713700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.668 [2024-12-07 02:40:58.715794] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.668 [2024-12-07 02:40:58.715883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.668 02:40:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.668 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.928 02:40:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.928 "name": "Existed_Raid", 00:07:47.928 "uuid": "5b38dae2-6e7d-41ad-adcf-0001c6bc83c1", 00:07:47.928 "strip_size_kb": 64, 00:07:47.928 "state": "configuring", 00:07:47.928 "raid_level": "concat", 00:07:47.928 "superblock": true, 00:07:47.928 "num_base_bdevs": 2, 00:07:47.928 "num_base_bdevs_discovered": 1, 00:07:47.928 "num_base_bdevs_operational": 2, 00:07:47.928 "base_bdevs_list": [ 00:07:47.928 { 00:07:47.928 "name": "BaseBdev1", 00:07:47.928 "uuid": "ad570956-6875-4a22-83cc-80fbf130d8b4", 00:07:47.928 "is_configured": true, 00:07:47.928 "data_offset": 2048, 00:07:47.928 "data_size": 63488 00:07:47.928 }, 00:07:47.928 { 00:07:47.928 "name": "BaseBdev2", 00:07:47.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.928 "is_configured": false, 00:07:47.928 "data_offset": 0, 00:07:47.928 "data_size": 0 00:07:47.928 } 00:07:47.928 ] 00:07:47.928 }' 00:07:47.928 02:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.928 02:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 [2024-12-07 02:40:59.199064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.263 [2024-12-07 02:40:59.199748] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:48.263 [2024-12-07 02:40:59.199897] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.263 BaseBdev2 00:07:48.263 [2024-12-07 02:40:59.200790] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.263 [2024-12-07 02:40:59.201279] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:48.263 [2024-12-07 02:40:59.201439] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:48.263 [2024-12-07 02:40:59.201903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.263 
02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 [ 00:07:48.263 { 00:07:48.263 "name": "BaseBdev2", 00:07:48.263 "aliases": [ 00:07:48.263 "6caa6e3d-34c2-47e5-9975-52f04c2860b7" 00:07:48.263 ], 00:07:48.263 "product_name": "Malloc disk", 00:07:48.263 "block_size": 512, 00:07:48.263 "num_blocks": 65536, 00:07:48.263 "uuid": "6caa6e3d-34c2-47e5-9975-52f04c2860b7", 00:07:48.263 "assigned_rate_limits": { 00:07:48.263 "rw_ios_per_sec": 0, 00:07:48.263 "rw_mbytes_per_sec": 0, 00:07:48.263 "r_mbytes_per_sec": 0, 00:07:48.263 "w_mbytes_per_sec": 0 00:07:48.263 }, 00:07:48.263 "claimed": true, 00:07:48.263 "claim_type": "exclusive_write", 00:07:48.263 "zoned": false, 00:07:48.263 "supported_io_types": { 00:07:48.263 "read": true, 00:07:48.263 "write": true, 00:07:48.263 "unmap": true, 00:07:48.263 "flush": true, 00:07:48.263 "reset": true, 00:07:48.263 "nvme_admin": false, 00:07:48.263 "nvme_io": false, 00:07:48.263 "nvme_io_md": false, 00:07:48.263 "write_zeroes": true, 00:07:48.263 "zcopy": true, 00:07:48.263 "get_zone_info": false, 00:07:48.263 "zone_management": false, 00:07:48.263 "zone_append": false, 00:07:48.263 "compare": false, 00:07:48.263 "compare_and_write": false, 00:07:48.263 "abort": true, 00:07:48.263 "seek_hole": false, 00:07:48.263 "seek_data": false, 00:07:48.263 "copy": true, 00:07:48.263 "nvme_iov_md": false 00:07:48.263 }, 00:07:48.263 "memory_domains": [ 00:07:48.263 { 00:07:48.263 "dma_device_id": "system", 00:07:48.263 "dma_device_type": 1 00:07:48.263 }, 00:07:48.263 { 00:07:48.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.263 "dma_device_type": 2 00:07:48.263 } 00:07:48.263 ], 00:07:48.263 "driver_specific": {} 00:07:48.263 } 00:07:48.263 ] 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:48.263 02:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.263 02:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.264 "name": "Existed_Raid", 00:07:48.264 "uuid": "5b38dae2-6e7d-41ad-adcf-0001c6bc83c1", 00:07:48.264 "strip_size_kb": 64, 00:07:48.264 "state": "online", 00:07:48.264 "raid_level": "concat", 00:07:48.264 "superblock": true, 00:07:48.264 "num_base_bdevs": 2, 00:07:48.264 "num_base_bdevs_discovered": 2, 00:07:48.264 "num_base_bdevs_operational": 2, 00:07:48.264 "base_bdevs_list": [ 00:07:48.264 { 00:07:48.264 "name": "BaseBdev1", 00:07:48.264 "uuid": "ad570956-6875-4a22-83cc-80fbf130d8b4", 00:07:48.264 "is_configured": true, 00:07:48.264 "data_offset": 2048, 00:07:48.264 "data_size": 63488 00:07:48.264 }, 00:07:48.264 { 00:07:48.264 "name": "BaseBdev2", 00:07:48.264 "uuid": "6caa6e3d-34c2-47e5-9975-52f04c2860b7", 00:07:48.264 "is_configured": true, 00:07:48.264 "data_offset": 2048, 00:07:48.264 "data_size": 63488 00:07:48.264 } 00:07:48.264 ] 00:07:48.264 }' 00:07:48.264 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.264 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.839 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:48.840 [2024-12-07 02:40:59.706418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:48.840 "name": "Existed_Raid", 00:07:48.840 "aliases": [ 00:07:48.840 "5b38dae2-6e7d-41ad-adcf-0001c6bc83c1" 00:07:48.840 ], 00:07:48.840 "product_name": "Raid Volume", 00:07:48.840 "block_size": 512, 00:07:48.840 "num_blocks": 126976, 00:07:48.840 "uuid": "5b38dae2-6e7d-41ad-adcf-0001c6bc83c1", 00:07:48.840 "assigned_rate_limits": { 00:07:48.840 "rw_ios_per_sec": 0, 00:07:48.840 "rw_mbytes_per_sec": 0, 00:07:48.840 "r_mbytes_per_sec": 0, 00:07:48.840 "w_mbytes_per_sec": 0 00:07:48.840 }, 00:07:48.840 "claimed": false, 00:07:48.840 "zoned": false, 00:07:48.840 "supported_io_types": { 00:07:48.840 "read": true, 00:07:48.840 "write": true, 00:07:48.840 "unmap": true, 00:07:48.840 "flush": true, 00:07:48.840 "reset": true, 00:07:48.840 "nvme_admin": false, 00:07:48.840 "nvme_io": false, 00:07:48.840 "nvme_io_md": false, 00:07:48.840 "write_zeroes": true, 00:07:48.840 "zcopy": false, 00:07:48.840 "get_zone_info": false, 00:07:48.840 "zone_management": false, 00:07:48.840 "zone_append": false, 00:07:48.840 "compare": false, 00:07:48.840 "compare_and_write": false, 00:07:48.840 "abort": false, 00:07:48.840 "seek_hole": false, 00:07:48.840 "seek_data": false, 00:07:48.840 "copy": false, 00:07:48.840 "nvme_iov_md": false 00:07:48.840 }, 00:07:48.840 "memory_domains": [ 00:07:48.840 { 00:07:48.840 
"dma_device_id": "system", 00:07:48.840 "dma_device_type": 1 00:07:48.840 }, 00:07:48.840 { 00:07:48.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.840 "dma_device_type": 2 00:07:48.840 }, 00:07:48.840 { 00:07:48.840 "dma_device_id": "system", 00:07:48.840 "dma_device_type": 1 00:07:48.840 }, 00:07:48.840 { 00:07:48.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.840 "dma_device_type": 2 00:07:48.840 } 00:07:48.840 ], 00:07:48.840 "driver_specific": { 00:07:48.840 "raid": { 00:07:48.840 "uuid": "5b38dae2-6e7d-41ad-adcf-0001c6bc83c1", 00:07:48.840 "strip_size_kb": 64, 00:07:48.840 "state": "online", 00:07:48.840 "raid_level": "concat", 00:07:48.840 "superblock": true, 00:07:48.840 "num_base_bdevs": 2, 00:07:48.840 "num_base_bdevs_discovered": 2, 00:07:48.840 "num_base_bdevs_operational": 2, 00:07:48.840 "base_bdevs_list": [ 00:07:48.840 { 00:07:48.840 "name": "BaseBdev1", 00:07:48.840 "uuid": "ad570956-6875-4a22-83cc-80fbf130d8b4", 00:07:48.840 "is_configured": true, 00:07:48.840 "data_offset": 2048, 00:07:48.840 "data_size": 63488 00:07:48.840 }, 00:07:48.840 { 00:07:48.840 "name": "BaseBdev2", 00:07:48.840 "uuid": "6caa6e3d-34c2-47e5-9975-52f04c2860b7", 00:07:48.840 "is_configured": true, 00:07:48.840 "data_offset": 2048, 00:07:48.840 "data_size": 63488 00:07:48.840 } 00:07:48.840 ] 00:07:48.840 } 00:07:48.840 } 00:07:48.840 }' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:48.840 BaseBdev2' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:48.840 02:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.840 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.840 [2024-12-07 02:40:59.913845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:48.840 [2024-12-07 02:40:59.913872] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.840 [2024-12-07 02:40:59.913933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.101 "name": "Existed_Raid", 00:07:49.101 "uuid": "5b38dae2-6e7d-41ad-adcf-0001c6bc83c1", 00:07:49.101 "strip_size_kb": 64, 00:07:49.101 "state": "offline", 00:07:49.101 "raid_level": "concat", 00:07:49.101 "superblock": true, 00:07:49.101 "num_base_bdevs": 2, 00:07:49.101 "num_base_bdevs_discovered": 1, 00:07:49.101 "num_base_bdevs_operational": 1, 00:07:49.101 "base_bdevs_list": [ 00:07:49.101 { 00:07:49.101 "name": null, 00:07:49.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.101 "is_configured": false, 00:07:49.101 "data_offset": 0, 00:07:49.101 "data_size": 63488 00:07:49.101 }, 00:07:49.101 { 00:07:49.101 "name": "BaseBdev2", 00:07:49.101 "uuid": "6caa6e3d-34c2-47e5-9975-52f04c2860b7", 00:07:49.101 "is_configured": true, 00:07:49.101 "data_offset": 2048, 00:07:49.101 "data_size": 63488 00:07:49.101 } 00:07:49.101 ] 
00:07:49.101 }' 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.101 02:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.361 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:49.361 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.361 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:49.361 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.361 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.361 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.621 [2024-12-07 02:41:00.481471] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:49.621 [2024-12-07 02:41:00.481613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.621 02:41:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73462 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73462 ']' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73462 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73462 00:07:49.621 killing process with pid 73462 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73462' 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73462 00:07:49.621 [2024-12-07 02:41:00.594780] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.621 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73462 00:07:49.621 [2024-12-07 02:41:00.596353] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.192 02:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:50.192 00:07:50.192 real 0m4.137s 00:07:50.192 user 0m6.301s 00:07:50.192 sys 0m0.912s 00:07:50.192 ************************************ 00:07:50.192 END TEST raid_state_function_test_sb 00:07:50.192 ************************************ 00:07:50.192 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.193 02:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.193 02:41:01 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:50.193 02:41:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:50.193 02:41:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.193 02:41:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.193 ************************************ 00:07:50.193 START TEST raid_superblock_test 00:07:50.193 ************************************ 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73703 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73703 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73703 ']' 00:07:50.193 
02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.193 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.193 [2024-12-07 02:41:01.118378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:50.193 [2024-12-07 02:41:01.118586] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73703 ] 00:07:50.453 [2024-12-07 02:41:01.277289] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.453 [2024-12-07 02:41:01.346080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.453 [2024-12-07 02:41:01.420797] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.453 [2024-12-07 02:41:01.420851] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 malloc1 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 [2024-12-07 02:41:01.962860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:51.022 [2024-12-07 02:41:01.963004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.022 [2024-12-07 02:41:01.963042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:51.022 [2024-12-07 02:41:01.963077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:51.022 [2024-12-07 02:41:01.965429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.022 [2024-12-07 02:41:01.965499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:51.022 pt1 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.022 02:41:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.022 malloc2 00:07:51.022 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.022 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:51.022 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.023 [2024-12-07 02:41:02.017407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:51.023 [2024-12-07 02:41:02.017502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.023 [2024-12-07 02:41:02.017537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:51.023 [2024-12-07 02:41:02.017560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.023 [2024-12-07 02:41:02.022115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.023 [2024-12-07 02:41:02.022181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:51.023 pt2 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.023 [2024-12-07 02:41:02.030438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:51.023 [2024-12-07 02:41:02.033277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:51.023 [2024-12-07 02:41:02.033553] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:07:51.023 [2024-12-07 02:41:02.033604] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:51.023 [2024-12-07 02:41:02.033970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:51.023 [2024-12-07 02:41:02.034139] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:07:51.023 [2024-12-07 02:41:02.034152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:07:51.023 [2024-12-07 02:41:02.034388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.023 02:41:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.023 "name": "raid_bdev1", 00:07:51.023 "uuid": "cd1fee81-6a9b-4695-b289-cfdce28b97e3", 00:07:51.023 "strip_size_kb": 64, 00:07:51.023 "state": "online", 00:07:51.023 "raid_level": "concat", 00:07:51.023 "superblock": true, 00:07:51.023 "num_base_bdevs": 2, 00:07:51.023 "num_base_bdevs_discovered": 2, 00:07:51.023 "num_base_bdevs_operational": 2, 00:07:51.023 "base_bdevs_list": [ 00:07:51.023 { 00:07:51.023 "name": "pt1", 00:07:51.023 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.023 "is_configured": true, 00:07:51.023 "data_offset": 2048, 00:07:51.023 "data_size": 63488 00:07:51.023 }, 00:07:51.023 { 00:07:51.023 "name": "pt2", 00:07:51.023 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.023 "is_configured": true, 00:07:51.023 "data_offset": 2048, 00:07:51.023 "data_size": 63488 00:07:51.023 } 00:07:51.023 ] 00:07:51.023 }' 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.023 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:51.592 
02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.592 [2024-12-07 02:41:02.477957] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.592 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:51.592 "name": "raid_bdev1", 00:07:51.592 "aliases": [ 00:07:51.592 "cd1fee81-6a9b-4695-b289-cfdce28b97e3" 00:07:51.592 ], 00:07:51.592 "product_name": "Raid Volume", 00:07:51.592 "block_size": 512, 00:07:51.592 "num_blocks": 126976, 00:07:51.593 "uuid": "cd1fee81-6a9b-4695-b289-cfdce28b97e3", 00:07:51.593 "assigned_rate_limits": { 00:07:51.593 "rw_ios_per_sec": 0, 00:07:51.593 "rw_mbytes_per_sec": 0, 00:07:51.593 "r_mbytes_per_sec": 0, 00:07:51.593 "w_mbytes_per_sec": 0 00:07:51.593 }, 00:07:51.593 "claimed": false, 00:07:51.593 "zoned": false, 00:07:51.593 "supported_io_types": { 00:07:51.593 "read": true, 00:07:51.593 "write": true, 00:07:51.593 "unmap": true, 00:07:51.593 "flush": true, 00:07:51.593 "reset": true, 00:07:51.593 "nvme_admin": false, 00:07:51.593 "nvme_io": false, 00:07:51.593 "nvme_io_md": false, 00:07:51.593 "write_zeroes": true, 00:07:51.593 "zcopy": false, 00:07:51.593 "get_zone_info": false, 00:07:51.593 "zone_management": false, 00:07:51.593 "zone_append": false, 00:07:51.593 "compare": false, 00:07:51.593 "compare_and_write": false, 00:07:51.593 "abort": false, 00:07:51.593 "seek_hole": false, 00:07:51.593 
"seek_data": false, 00:07:51.593 "copy": false, 00:07:51.593 "nvme_iov_md": false 00:07:51.593 }, 00:07:51.593 "memory_domains": [ 00:07:51.593 { 00:07:51.593 "dma_device_id": "system", 00:07:51.593 "dma_device_type": 1 00:07:51.593 }, 00:07:51.593 { 00:07:51.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.593 "dma_device_type": 2 00:07:51.593 }, 00:07:51.593 { 00:07:51.593 "dma_device_id": "system", 00:07:51.593 "dma_device_type": 1 00:07:51.593 }, 00:07:51.593 { 00:07:51.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.593 "dma_device_type": 2 00:07:51.593 } 00:07:51.593 ], 00:07:51.593 "driver_specific": { 00:07:51.593 "raid": { 00:07:51.593 "uuid": "cd1fee81-6a9b-4695-b289-cfdce28b97e3", 00:07:51.593 "strip_size_kb": 64, 00:07:51.593 "state": "online", 00:07:51.593 "raid_level": "concat", 00:07:51.593 "superblock": true, 00:07:51.593 "num_base_bdevs": 2, 00:07:51.593 "num_base_bdevs_discovered": 2, 00:07:51.593 "num_base_bdevs_operational": 2, 00:07:51.593 "base_bdevs_list": [ 00:07:51.593 { 00:07:51.593 "name": "pt1", 00:07:51.593 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:51.593 "is_configured": true, 00:07:51.593 "data_offset": 2048, 00:07:51.593 "data_size": 63488 00:07:51.593 }, 00:07:51.593 { 00:07:51.593 "name": "pt2", 00:07:51.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:51.593 "is_configured": true, 00:07:51.593 "data_offset": 2048, 00:07:51.593 "data_size": 63488 00:07:51.593 } 00:07:51.593 ] 00:07:51.593 } 00:07:51.593 } 00:07:51.593 }' 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:51.593 pt2' 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.593 02:41:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.593 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 [2024-12-07 02:41:02.721416] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cd1fee81-6a9b-4695-b289-cfdce28b97e3 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cd1fee81-6a9b-4695-b289-cfdce28b97e3 ']' 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 [2024-12-07 02:41:02.765114] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.854 [2024-12-07 02:41:02.765179] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.854 [2024-12-07 02:41:02.765276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.854 [2024-12-07 02:41:02.765343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.854 [2024-12-07 02:41:02.765412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.854 [2024-12-07 02:41:02.880967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:51.854 [2024-12-07 02:41:02.883081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:51.854 [2024-12-07 02:41:02.883185] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:51.854 [2024-12-07 02:41:02.883261] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:51.854 [2024-12-07 02:41:02.883320] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:51.854 [2024-12-07 02:41:02.883355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:07:51.854 request: 00:07:51.854 { 00:07:51.854 "name": "raid_bdev1", 00:07:51.854 "raid_level": "concat", 00:07:51.854 "base_bdevs": [ 00:07:51.854 "malloc1", 00:07:51.854 "malloc2" 00:07:51.854 ], 00:07:51.854 "strip_size_kb": 64, 00:07:51.854 "superblock": false, 00:07:51.854 "method": "bdev_raid_create", 00:07:51.854 "req_id": 1 00:07:51.854 } 00:07:51.854 Got JSON-RPC error response 00:07:51.854 response: 00:07:51.854 { 00:07:51.854 "code": -17, 00:07:51.854 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:51.854 } 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:51.854 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.114 [2024-12-07 02:41:02.944820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:52.114 [2024-12-07 02:41:02.944896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.114 [2024-12-07 02:41:02.944932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:52.114 [2024-12-07 02:41:02.944959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.114 [2024-12-07 02:41:02.947302] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.114 [2024-12-07 02:41:02.947362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:52.114 [2024-12-07 02:41:02.947444] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:52.114 [2024-12-07 02:41:02.947496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:52.114 pt1 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.114 02:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.114 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.114 "name": "raid_bdev1", 00:07:52.114 "uuid": "cd1fee81-6a9b-4695-b289-cfdce28b97e3", 00:07:52.115 "strip_size_kb": 64, 00:07:52.115 "state": "configuring", 00:07:52.115 "raid_level": "concat", 00:07:52.115 "superblock": true, 00:07:52.115 "num_base_bdevs": 2, 00:07:52.115 "num_base_bdevs_discovered": 1, 00:07:52.115 "num_base_bdevs_operational": 2, 00:07:52.115 "base_bdevs_list": [ 00:07:52.115 { 00:07:52.115 
"name": "pt1", 00:07:52.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:52.115 "is_configured": true, 00:07:52.115 "data_offset": 2048, 00:07:52.115 "data_size": 63488 00:07:52.115 }, 00:07:52.115 { 00:07:52.115 "name": null, 00:07:52.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.115 "is_configured": false, 00:07:52.115 "data_offset": 2048, 00:07:52.115 "data_size": 63488 00:07:52.115 } 00:07:52.115 ] 00:07:52.115 }' 00:07:52.115 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.115 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.375 [2024-12-07 02:41:03.420054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:52.375 [2024-12-07 02:41:03.420137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:52.375 [2024-12-07 02:41:03.420168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:52.375 [2024-12-07 02:41:03.420178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:52.375 [2024-12-07 02:41:03.420710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:52.375 [2024-12-07 02:41:03.420729] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:52.375 [2024-12-07 02:41:03.420822] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:52.375 [2024-12-07 02:41:03.420848] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:52.375 [2024-12-07 02:41:03.420961] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:52.375 [2024-12-07 02:41:03.420977] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:52.375 [2024-12-07 02:41:03.421238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:07:52.375 [2024-12-07 02:41:03.421361] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:52.375 [2024-12-07 02:41:03.421385] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:52.375 [2024-12-07 02:41:03.421493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:52.375 pt2 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.375 
02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.375 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.634 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.634 "name": "raid_bdev1", 00:07:52.634 "uuid": "cd1fee81-6a9b-4695-b289-cfdce28b97e3", 00:07:52.634 "strip_size_kb": 64, 00:07:52.634 "state": "online", 00:07:52.634 "raid_level": "concat", 00:07:52.634 "superblock": true, 00:07:52.634 "num_base_bdevs": 2, 00:07:52.634 "num_base_bdevs_discovered": 2, 00:07:52.634 "num_base_bdevs_operational": 2, 00:07:52.634 "base_bdevs_list": [ 00:07:52.634 { 00:07:52.634 "name": "pt1", 00:07:52.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:52.635 "is_configured": true, 00:07:52.635 "data_offset": 2048, 00:07:52.635 "data_size": 63488 00:07:52.635 }, 00:07:52.635 { 00:07:52.635 "name": "pt2", 00:07:52.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.635 "is_configured": true, 00:07:52.635 "data_offset": 2048, 00:07:52.635 "data_size": 63488 
00:07:52.635 } 00:07:52.635 ] 00:07:52.635 }' 00:07:52.635 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.635 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.894 [2024-12-07 02:41:03.855984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.894 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.894 "name": "raid_bdev1", 00:07:52.894 "aliases": [ 00:07:52.894 "cd1fee81-6a9b-4695-b289-cfdce28b97e3" 00:07:52.894 ], 00:07:52.894 "product_name": "Raid Volume", 00:07:52.894 "block_size": 512, 00:07:52.894 "num_blocks": 126976, 00:07:52.894 "uuid": "cd1fee81-6a9b-4695-b289-cfdce28b97e3", 00:07:52.894 "assigned_rate_limits": { 00:07:52.894 
"rw_ios_per_sec": 0, 00:07:52.894 "rw_mbytes_per_sec": 0, 00:07:52.894 "r_mbytes_per_sec": 0, 00:07:52.894 "w_mbytes_per_sec": 0 00:07:52.894 }, 00:07:52.894 "claimed": false, 00:07:52.894 "zoned": false, 00:07:52.894 "supported_io_types": { 00:07:52.894 "read": true, 00:07:52.894 "write": true, 00:07:52.894 "unmap": true, 00:07:52.894 "flush": true, 00:07:52.894 "reset": true, 00:07:52.894 "nvme_admin": false, 00:07:52.894 "nvme_io": false, 00:07:52.894 "nvme_io_md": false, 00:07:52.894 "write_zeroes": true, 00:07:52.894 "zcopy": false, 00:07:52.894 "get_zone_info": false, 00:07:52.894 "zone_management": false, 00:07:52.894 "zone_append": false, 00:07:52.894 "compare": false, 00:07:52.894 "compare_and_write": false, 00:07:52.894 "abort": false, 00:07:52.894 "seek_hole": false, 00:07:52.894 "seek_data": false, 00:07:52.894 "copy": false, 00:07:52.894 "nvme_iov_md": false 00:07:52.894 }, 00:07:52.894 "memory_domains": [ 00:07:52.894 { 00:07:52.894 "dma_device_id": "system", 00:07:52.894 "dma_device_type": 1 00:07:52.894 }, 00:07:52.894 { 00:07:52.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.894 "dma_device_type": 2 00:07:52.894 }, 00:07:52.894 { 00:07:52.894 "dma_device_id": "system", 00:07:52.894 "dma_device_type": 1 00:07:52.894 }, 00:07:52.894 { 00:07:52.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.894 "dma_device_type": 2 00:07:52.894 } 00:07:52.894 ], 00:07:52.894 "driver_specific": { 00:07:52.894 "raid": { 00:07:52.894 "uuid": "cd1fee81-6a9b-4695-b289-cfdce28b97e3", 00:07:52.894 "strip_size_kb": 64, 00:07:52.895 "state": "online", 00:07:52.895 "raid_level": "concat", 00:07:52.895 "superblock": true, 00:07:52.895 "num_base_bdevs": 2, 00:07:52.895 "num_base_bdevs_discovered": 2, 00:07:52.895 "num_base_bdevs_operational": 2, 00:07:52.895 "base_bdevs_list": [ 00:07:52.895 { 00:07:52.895 "name": "pt1", 00:07:52.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:52.895 "is_configured": true, 00:07:52.895 "data_offset": 2048, 00:07:52.895 
"data_size": 63488 00:07:52.895 }, 00:07:52.895 { 00:07:52.895 "name": "pt2", 00:07:52.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:52.895 "is_configured": true, 00:07:52.895 "data_offset": 2048, 00:07:52.895 "data_size": 63488 00:07:52.895 } 00:07:52.895 ] 00:07:52.895 } 00:07:52.895 } 00:07:52.895 }' 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:52.895 pt2' 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.895 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 02:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.155 [2024-12-07 02:41:04.067677] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cd1fee81-6a9b-4695-b289-cfdce28b97e3 '!=' cd1fee81-6a9b-4695-b289-cfdce28b97e3 ']' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73703 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73703 ']' 
00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73703 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73703 00:07:53.155 killing process with pid 73703 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73703' 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73703 00:07:53.155 [2024-12-07 02:41:04.133484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.155 [2024-12-07 02:41:04.133568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.155 [2024-12-07 02:41:04.133630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.155 [2024-12-07 02:41:04.133640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:53.155 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73703 00:07:53.155 [2024-12-07 02:41:04.175905] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:53.727 02:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:53.727 ************************************ 00:07:53.727 END TEST raid_superblock_test 00:07:53.727 ************************************ 00:07:53.727 00:07:53.727 real 0m3.509s 00:07:53.727 user 0m5.249s 00:07:53.727 sys 
0m0.798s 00:07:53.727 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.727 02:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.727 02:41:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:53.727 02:41:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:53.727 02:41:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.727 02:41:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.727 ************************************ 00:07:53.727 START TEST raid_read_error_test 00:07:53.727 ************************************ 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:53.727 
02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aqDfYbWuV7 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73904 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73904 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73904 ']' 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.727 02:41:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.727 [2024-12-07 02:41:04.723165] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:07:53.728 [2024-12-07 02:41:04.723430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73904 ] 00:07:53.988 [2024-12-07 02:41:04.873055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.988 [2024-12-07 02:41:04.946639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.988 [2024-12-07 02:41:05.024387] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.988 [2024-12-07 02:41:05.024522] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.556 BaseBdev1_malloc 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.556 true 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.556 [2024-12-07 02:41:05.587519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:54.556 [2024-12-07 02:41:05.587578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.556 [2024-12-07 02:41:05.587638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:54.556 [2024-12-07 02:41:05.587648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.556 [2024-12-07 02:41:05.590055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.556 [2024-12-07 02:41:05.590098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:54.556 BaseBdev1 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:54.556 02:41:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.556 BaseBdev2_malloc 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.556 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 true 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 [2024-12-07 02:41:05.650187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:54.814 [2024-12-07 02:41:05.650245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.814 [2024-12-07 02:41:05.650271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:54.814 [2024-12-07 02:41:05.650282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.814 [2024-12-07 02:41:05.652975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.814 [2024-12-07 02:41:05.653014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:54.814 BaseBdev2 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 [2024-12-07 02:41:05.662197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.814 [2024-12-07 02:41:05.664273] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.814 [2024-12-07 02:41:05.664462] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:54.814 [2024-12-07 02:41:05.664475] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.814 [2024-12-07 02:41:05.664751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:54.814 [2024-12-07 02:41:05.664883] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:54.814 [2024-12-07 02:41:05.664896] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:54.814 [2024-12-07 02:41:05.665018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.814 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.814 "name": "raid_bdev1", 00:07:54.814 "uuid": "97d63582-777b-4607-9aba-4bbbf2ba0303", 00:07:54.814 "strip_size_kb": 64, 00:07:54.814 "state": "online", 00:07:54.814 "raid_level": "concat", 00:07:54.814 "superblock": true, 00:07:54.814 "num_base_bdevs": 2, 00:07:54.814 "num_base_bdevs_discovered": 2, 00:07:54.814 "num_base_bdevs_operational": 2, 00:07:54.814 "base_bdevs_list": [ 00:07:54.814 { 00:07:54.814 "name": "BaseBdev1", 00:07:54.814 "uuid": "b0a43c19-420f-5a0e-8e3f-65a1f1e2446f", 00:07:54.814 "is_configured": true, 00:07:54.814 "data_offset": 2048, 00:07:54.814 "data_size": 63488 
00:07:54.814 }, 00:07:54.814 { 00:07:54.814 "name": "BaseBdev2", 00:07:54.814 "uuid": "e501488a-562c-538c-9e28-83eb4bbf7361", 00:07:54.814 "is_configured": true, 00:07:54.814 "data_offset": 2048, 00:07:54.815 "data_size": 63488 00:07:54.815 } 00:07:54.815 ] 00:07:54.815 }' 00:07:54.815 02:41:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.815 02:41:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.073 02:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:55.073 02:41:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:55.332 [2024-12-07 02:41:06.173696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.269 "name": "raid_bdev1", 00:07:56.269 "uuid": "97d63582-777b-4607-9aba-4bbbf2ba0303", 00:07:56.269 "strip_size_kb": 64, 00:07:56.269 "state": "online", 00:07:56.269 "raid_level": "concat", 00:07:56.269 "superblock": true, 00:07:56.269 "num_base_bdevs": 2, 00:07:56.269 "num_base_bdevs_discovered": 2, 00:07:56.269 "num_base_bdevs_operational": 2, 00:07:56.269 "base_bdevs_list": [ 00:07:56.269 { 00:07:56.269 "name": "BaseBdev1", 00:07:56.269 "uuid": "b0a43c19-420f-5a0e-8e3f-65a1f1e2446f", 00:07:56.269 "is_configured": true, 00:07:56.269 "data_offset": 2048, 00:07:56.269 "data_size": 63488 
00:07:56.269 }, 00:07:56.269 { 00:07:56.269 "name": "BaseBdev2", 00:07:56.269 "uuid": "e501488a-562c-538c-9e28-83eb4bbf7361", 00:07:56.269 "is_configured": true, 00:07:56.269 "data_offset": 2048, 00:07:56.269 "data_size": 63488 00:07:56.269 } 00:07:56.269 ] 00:07:56.269 }' 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.269 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.529 [2024-12-07 02:41:07.561741] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.529 [2024-12-07 02:41:07.561783] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.529 [2024-12-07 02:41:07.564273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.529 [2024-12-07 02:41:07.564322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.529 [2024-12-07 02:41:07.564363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.529 [2024-12-07 02:41:07.564381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:07:56.529 { 00:07:56.529 "results": [ 00:07:56.529 { 00:07:56.529 "job": "raid_bdev1", 00:07:56.529 "core_mask": "0x1", 00:07:56.529 "workload": "randrw", 00:07:56.529 "percentage": 50, 00:07:56.529 "status": "finished", 00:07:56.529 "queue_depth": 1, 00:07:56.529 "io_size": 131072, 00:07:56.529 "runtime": 1.388778, 00:07:56.529 "iops": 16032.800058756691, 00:07:56.529 "mibps": 2004.1000073445864, 00:07:56.529 
"io_failed": 1, 00:07:56.529 "io_timeout": 0, 00:07:56.529 "avg_latency_us": 87.23652958938395, 00:07:56.529 "min_latency_us": 24.370305676855896, 00:07:56.529 "max_latency_us": 1395.1441048034935 00:07:56.529 } 00:07:56.529 ], 00:07:56.529 "core_count": 1 00:07:56.529 } 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73904 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73904 ']' 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73904 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.529 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73904 00:07:56.789 killing process with pid 73904 00:07:56.789 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.789 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.789 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73904' 00:07:56.789 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73904 00:07:56.789 [2024-12-07 02:41:07.614029] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.789 02:41:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73904 00:07:56.789 [2024-12-07 02:41:07.642513] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aqDfYbWuV7 00:07:57.050 02:41:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.050 ************************************ 00:07:57.050 END TEST raid_read_error_test 00:07:57.050 ************************************ 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:57.050 00:07:57.050 real 0m3.404s 00:07:57.050 user 0m4.147s 00:07:57.050 sys 0m0.618s 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.050 02:41:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.050 02:41:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:57.050 02:41:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:57.050 02:41:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.050 02:41:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.050 ************************************ 00:07:57.050 START TEST raid_write_error_test 00:07:57.050 ************************************ 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:57.050 02:41:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.050 02:41:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XTiTwsWAMx 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74033 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74033 00:07:57.050 02:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74033 ']' 00:07:57.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.310 02:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.310 02:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.310 02:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.310 02:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.310 02:41:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.310 [2024-12-07 02:41:08.212009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:07:57.310 [2024-12-07 02:41:08.212161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74033 ] 00:07:57.310 [2024-12-07 02:41:08.374660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.570 [2024-12-07 02:41:08.445630] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.570 [2024-12-07 02:41:08.522149] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.570 [2024-12-07 02:41:08.522263] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.140 BaseBdev1_malloc 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.140 true 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.140 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.141 [2024-12-07 02:41:09.059867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.141 [2024-12-07 02:41:09.059987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.141 [2024-12-07 02:41:09.060011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:58.141 [2024-12-07 02:41:09.060028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.141 [2024-12-07 02:41:09.062372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.141 [2024-12-07 02:41:09.062408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.141 BaseBdev1 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.141 BaseBdev2_malloc 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.141 02:41:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.141 true 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.141 [2024-12-07 02:41:09.122950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.141 [2024-12-07 02:41:09.123018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.141 [2024-12-07 02:41:09.123048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:58.141 [2024-12-07 02:41:09.123062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.141 [2024-12-07 02:41:09.126544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.141 BaseBdev2 00:07:58.141 [2024-12-07 02:41:09.126658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.141 [2024-12-07 02:41:09.134939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:58.141 [2024-12-07 02:41:09.137161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.141 [2024-12-07 02:41:09.137340] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:07:58.141 [2024-12-07 02:41:09.137352] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.141 [2024-12-07 02:41:09.137606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:58.141 [2024-12-07 02:41:09.137762] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:07:58.141 [2024-12-07 02:41:09.137775] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:07:58.141 [2024-12-07 02:41:09.137892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.141 02:41:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.141 "name": "raid_bdev1", 00:07:58.141 "uuid": "09fc101c-17d4-4948-9d38-df67c93e8ead", 00:07:58.141 "strip_size_kb": 64, 00:07:58.141 "state": "online", 00:07:58.141 "raid_level": "concat", 00:07:58.141 "superblock": true, 00:07:58.141 "num_base_bdevs": 2, 00:07:58.141 "num_base_bdevs_discovered": 2, 00:07:58.141 "num_base_bdevs_operational": 2, 00:07:58.141 "base_bdevs_list": [ 00:07:58.141 { 00:07:58.141 "name": "BaseBdev1", 00:07:58.141 "uuid": "091c08b3-7d07-5ce1-ab15-8521395f3903", 00:07:58.141 "is_configured": true, 00:07:58.141 "data_offset": 2048, 00:07:58.141 "data_size": 63488 00:07:58.141 }, 00:07:58.141 { 00:07:58.141 "name": "BaseBdev2", 00:07:58.141 "uuid": "f6fcfca9-ba1d-5f66-be94-8c6de09bd623", 00:07:58.141 "is_configured": true, 00:07:58.141 "data_offset": 2048, 00:07:58.141 "data_size": 63488 00:07:58.141 } 00:07:58.141 ] 00:07:58.141 }' 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.141 02:41:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.709 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.709 02:41:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.709 [2024-12-07 02:41:09.610587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.647 "name": "raid_bdev1", 00:07:59.647 "uuid": "09fc101c-17d4-4948-9d38-df67c93e8ead", 00:07:59.647 "strip_size_kb": 64, 00:07:59.647 "state": "online", 00:07:59.647 "raid_level": "concat", 00:07:59.647 "superblock": true, 00:07:59.647 "num_base_bdevs": 2, 00:07:59.647 "num_base_bdevs_discovered": 2, 00:07:59.647 "num_base_bdevs_operational": 2, 00:07:59.647 "base_bdevs_list": [ 00:07:59.647 { 00:07:59.647 "name": "BaseBdev1", 00:07:59.647 "uuid": "091c08b3-7d07-5ce1-ab15-8521395f3903", 00:07:59.647 "is_configured": true, 00:07:59.647 "data_offset": 2048, 00:07:59.647 "data_size": 63488 00:07:59.647 }, 00:07:59.647 { 00:07:59.647 "name": "BaseBdev2", 00:07:59.647 "uuid": "f6fcfca9-ba1d-5f66-be94-8c6de09bd623", 00:07:59.647 "is_configured": true, 00:07:59.647 "data_offset": 2048, 00:07:59.647 "data_size": 63488 00:07:59.647 } 00:07:59.647 ] 00:07:59.647 }' 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.647 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.218 02:41:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:00.218 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.218 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.218 [2024-12-07 02:41:10.990692] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:00.218 [2024-12-07 02:41:10.990813] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:00.218 [2024-12-07 02:41:10.993333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.218 [2024-12-07 02:41:10.993426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.218 [2024-12-07 02:41:10.993484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.218 [2024-12-07 02:41:10.993522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:00.218 { 00:08:00.218 "results": [ 00:08:00.218 { 00:08:00.218 "job": "raid_bdev1", 00:08:00.218 "core_mask": "0x1", 00:08:00.218 "workload": "randrw", 00:08:00.218 "percentage": 50, 00:08:00.218 "status": "finished", 00:08:00.218 "queue_depth": 1, 00:08:00.218 "io_size": 131072, 00:08:00.218 "runtime": 1.380757, 00:08:00.218 "iops": 16058.582357359042, 00:08:00.218 "mibps": 2007.3227946698803, 00:08:00.218 "io_failed": 1, 00:08:00.218 "io_timeout": 0, 00:08:00.218 "avg_latency_us": 87.078694076189, 00:08:00.218 "min_latency_us": 24.370305676855896, 00:08:00.218 "max_latency_us": 1430.9170305676855 00:08:00.218 } 00:08:00.218 ], 00:08:00.218 "core_count": 1 00:08:00.218 } 00:08:00.218 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.218 02:41:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74033 00:08:00.218 02:41:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74033 ']' 00:08:00.218 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74033 00:08:00.218 02:41:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:00.218 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.218 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74033 00:08:00.218 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.218 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.218 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74033' 00:08:00.218 killing process with pid 74033 00:08:00.218 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74033 00:08:00.218 [2024-12-07 02:41:11.033027] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.218 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74033 00:08:00.218 [2024-12-07 02:41:11.062123] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XTiTwsWAMx 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.478 02:41:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:00.478 00:08:00.478 real 0m3.337s 00:08:00.478 user 0m4.008s 00:08:00.478 sys 0m0.626s 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.478 ************************************ 00:08:00.478 END TEST raid_write_error_test 00:08:00.478 ************************************ 00:08:00.478 02:41:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.479 02:41:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:00.479 02:41:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:00.479 02:41:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.479 02:41:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.479 02:41:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.479 ************************************ 00:08:00.479 START TEST raid_state_function_test 00:08:00.479 ************************************ 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74160 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74160' 00:08:00.479 Process raid pid: 74160 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74160 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74160 ']' 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.479 02:41:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.739 [2024-12-07 02:41:11.604878] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:00.739 [2024-12-07 02:41:11.605006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.739 [2024-12-07 02:41:11.765641] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.002 [2024-12-07 02:41:11.835238] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.002 [2024-12-07 02:41:11.910670] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.002 [2024-12-07 02:41:11.910720] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.599 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.600 [2024-12-07 02:41:12.441429] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.600 [2024-12-07 02:41:12.441488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.600 [2024-12-07 02:41:12.441502] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.600 [2024-12-07 02:41:12.441512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.600 02:41:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.600 "name": "Existed_Raid", 00:08:01.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.600 "strip_size_kb": 0, 00:08:01.600 "state": "configuring", 00:08:01.600 
"raid_level": "raid1", 00:08:01.600 "superblock": false, 00:08:01.600 "num_base_bdevs": 2, 00:08:01.600 "num_base_bdevs_discovered": 0, 00:08:01.600 "num_base_bdevs_operational": 2, 00:08:01.600 "base_bdevs_list": [ 00:08:01.600 { 00:08:01.600 "name": "BaseBdev1", 00:08:01.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.600 "is_configured": false, 00:08:01.600 "data_offset": 0, 00:08:01.600 "data_size": 0 00:08:01.600 }, 00:08:01.600 { 00:08:01.600 "name": "BaseBdev2", 00:08:01.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.600 "is_configured": false, 00:08:01.600 "data_offset": 0, 00:08:01.600 "data_size": 0 00:08:01.600 } 00:08:01.600 ] 00:08:01.600 }' 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.600 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 [2024-12-07 02:41:12.896579] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.860 [2024-12-07 02:41:12.896688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:01.860 [2024-12-07 02:41:12.908616] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.860 [2024-12-07 02:41:12.908696] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.860 [2024-12-07 02:41:12.908726] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.860 [2024-12-07 02:41:12.908751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.860 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.860 [2024-12-07 02:41:12.935559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.119 BaseBdev1 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.119 [ 00:08:02.119 { 00:08:02.119 "name": "BaseBdev1", 00:08:02.119 "aliases": [ 00:08:02.119 "a9942cd7-0809-44b9-bf85-0b56226ff993" 00:08:02.119 ], 00:08:02.119 "product_name": "Malloc disk", 00:08:02.119 "block_size": 512, 00:08:02.119 "num_blocks": 65536, 00:08:02.119 "uuid": "a9942cd7-0809-44b9-bf85-0b56226ff993", 00:08:02.119 "assigned_rate_limits": { 00:08:02.119 "rw_ios_per_sec": 0, 00:08:02.119 "rw_mbytes_per_sec": 0, 00:08:02.119 "r_mbytes_per_sec": 0, 00:08:02.119 "w_mbytes_per_sec": 0 00:08:02.119 }, 00:08:02.119 "claimed": true, 00:08:02.119 "claim_type": "exclusive_write", 00:08:02.119 "zoned": false, 00:08:02.119 "supported_io_types": { 00:08:02.119 "read": true, 00:08:02.119 "write": true, 00:08:02.119 "unmap": true, 00:08:02.119 "flush": true, 00:08:02.119 "reset": true, 00:08:02.119 "nvme_admin": false, 00:08:02.119 "nvme_io": false, 00:08:02.119 "nvme_io_md": false, 00:08:02.119 "write_zeroes": true, 00:08:02.119 "zcopy": true, 00:08:02.119 "get_zone_info": false, 00:08:02.119 "zone_management": false, 00:08:02.119 "zone_append": false, 00:08:02.119 "compare": false, 00:08:02.119 "compare_and_write": false, 00:08:02.119 "abort": true, 00:08:02.119 "seek_hole": false, 00:08:02.119 "seek_data": false, 00:08:02.119 "copy": true, 00:08:02.119 "nvme_iov_md": 
false 00:08:02.119 }, 00:08:02.119 "memory_domains": [ 00:08:02.119 { 00:08:02.119 "dma_device_id": "system", 00:08:02.119 "dma_device_type": 1 00:08:02.119 }, 00:08:02.119 { 00:08:02.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.119 "dma_device_type": 2 00:08:02.119 } 00:08:02.119 ], 00:08:02.119 "driver_specific": {} 00:08:02.119 } 00:08:02.119 ] 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.119 
02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.119 02:41:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.119 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.119 "name": "Existed_Raid", 00:08:02.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.119 "strip_size_kb": 0, 00:08:02.119 "state": "configuring", 00:08:02.119 "raid_level": "raid1", 00:08:02.119 "superblock": false, 00:08:02.119 "num_base_bdevs": 2, 00:08:02.119 "num_base_bdevs_discovered": 1, 00:08:02.120 "num_base_bdevs_operational": 2, 00:08:02.120 "base_bdevs_list": [ 00:08:02.120 { 00:08:02.120 "name": "BaseBdev1", 00:08:02.120 "uuid": "a9942cd7-0809-44b9-bf85-0b56226ff993", 00:08:02.120 "is_configured": true, 00:08:02.120 "data_offset": 0, 00:08:02.120 "data_size": 65536 00:08:02.120 }, 00:08:02.120 { 00:08:02.120 "name": "BaseBdev2", 00:08:02.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.120 "is_configured": false, 00:08:02.120 "data_offset": 0, 00:08:02.120 "data_size": 0 00:08:02.120 } 00:08:02.120 ] 00:08:02.120 }' 00:08:02.120 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.120 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.388 [2024-12-07 02:41:13.418702] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.388 [2024-12-07 02:41:13.418795] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.388 [2024-12-07 02:41:13.430727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:02.388 [2024-12-07 02:41:13.432858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:02.388 [2024-12-07 02:41:13.432929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.388 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.650 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.650 "name": "Existed_Raid", 00:08:02.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.650 "strip_size_kb": 0, 00:08:02.650 "state": "configuring", 00:08:02.650 "raid_level": "raid1", 00:08:02.650 "superblock": false, 00:08:02.650 "num_base_bdevs": 2, 00:08:02.650 "num_base_bdevs_discovered": 1, 00:08:02.650 "num_base_bdevs_operational": 2, 00:08:02.650 "base_bdevs_list": [ 00:08:02.650 { 00:08:02.650 "name": "BaseBdev1", 00:08:02.650 "uuid": "a9942cd7-0809-44b9-bf85-0b56226ff993", 00:08:02.650 "is_configured": true, 00:08:02.650 "data_offset": 0, 00:08:02.650 "data_size": 65536 00:08:02.650 }, 00:08:02.650 { 00:08:02.650 "name": "BaseBdev2", 00:08:02.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.650 "is_configured": false, 00:08:02.650 "data_offset": 0, 00:08:02.650 "data_size": 0 00:08:02.650 } 00:08:02.650 ] 
00:08:02.650 }' 00:08:02.650 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.650 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.908 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:02.908 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.908 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.908 [2024-12-07 02:41:13.895982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.908 [2024-12-07 02:41:13.896039] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:02.908 [2024-12-07 02:41:13.896049] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:02.908 [2024-12-07 02:41:13.896408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:02.908 [2024-12-07 02:41:13.896601] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:02.909 [2024-12-07 02:41:13.896628] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:02.909 [2024-12-07 02:41:13.896882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.909 BaseBdev2 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 [ 00:08:02.909 { 00:08:02.909 "name": "BaseBdev2", 00:08:02.909 "aliases": [ 00:08:02.909 "ee68cf96-9827-4926-8897-1a236df0d799" 00:08:02.909 ], 00:08:02.909 "product_name": "Malloc disk", 00:08:02.909 "block_size": 512, 00:08:02.909 "num_blocks": 65536, 00:08:02.909 "uuid": "ee68cf96-9827-4926-8897-1a236df0d799", 00:08:02.909 "assigned_rate_limits": { 00:08:02.909 "rw_ios_per_sec": 0, 00:08:02.909 "rw_mbytes_per_sec": 0, 00:08:02.909 "r_mbytes_per_sec": 0, 00:08:02.909 "w_mbytes_per_sec": 0 00:08:02.909 }, 00:08:02.909 "claimed": true, 00:08:02.909 "claim_type": "exclusive_write", 00:08:02.909 "zoned": false, 00:08:02.909 "supported_io_types": { 00:08:02.909 "read": true, 00:08:02.909 "write": true, 00:08:02.909 "unmap": true, 00:08:02.909 "flush": true, 00:08:02.909 "reset": true, 00:08:02.909 "nvme_admin": false, 00:08:02.909 "nvme_io": false, 00:08:02.909 "nvme_io_md": false, 00:08:02.909 "write_zeroes": 
true, 00:08:02.909 "zcopy": true, 00:08:02.909 "get_zone_info": false, 00:08:02.909 "zone_management": false, 00:08:02.909 "zone_append": false, 00:08:02.909 "compare": false, 00:08:02.909 "compare_and_write": false, 00:08:02.909 "abort": true, 00:08:02.909 "seek_hole": false, 00:08:02.909 "seek_data": false, 00:08:02.909 "copy": true, 00:08:02.909 "nvme_iov_md": false 00:08:02.909 }, 00:08:02.909 "memory_domains": [ 00:08:02.909 { 00:08:02.909 "dma_device_id": "system", 00:08:02.909 "dma_device_type": 1 00:08:02.909 }, 00:08:02.909 { 00:08:02.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.909 "dma_device_type": 2 00:08:02.909 } 00:08:02.909 ], 00:08:02.909 "driver_specific": {} 00:08:02.909 } 00:08:02.909 ] 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.909 02:41:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.909 02:41:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.168 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.168 "name": "Existed_Raid", 00:08:03.168 "uuid": "64ae096a-bde9-4663-91db-1bb992c31299", 00:08:03.168 "strip_size_kb": 0, 00:08:03.168 "state": "online", 00:08:03.168 "raid_level": "raid1", 00:08:03.168 "superblock": false, 00:08:03.168 "num_base_bdevs": 2, 00:08:03.168 "num_base_bdevs_discovered": 2, 00:08:03.168 "num_base_bdevs_operational": 2, 00:08:03.168 "base_bdevs_list": [ 00:08:03.168 { 00:08:03.168 "name": "BaseBdev1", 00:08:03.168 "uuid": "a9942cd7-0809-44b9-bf85-0b56226ff993", 00:08:03.168 "is_configured": true, 00:08:03.168 "data_offset": 0, 00:08:03.168 "data_size": 65536 00:08:03.168 }, 00:08:03.168 { 00:08:03.168 "name": "BaseBdev2", 00:08:03.168 "uuid": "ee68cf96-9827-4926-8897-1a236df0d799", 00:08:03.168 "is_configured": true, 00:08:03.168 "data_offset": 0, 00:08:03.168 "data_size": 65536 00:08:03.168 } 00:08:03.168 ] 00:08:03.168 }' 00:08:03.168 02:41:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.168 02:41:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.428 [2024-12-07 02:41:14.383950] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.428 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.428 "name": "Existed_Raid", 00:08:03.428 "aliases": [ 00:08:03.428 "64ae096a-bde9-4663-91db-1bb992c31299" 00:08:03.428 ], 00:08:03.428 "product_name": "Raid Volume", 00:08:03.428 "block_size": 512, 00:08:03.428 "num_blocks": 65536, 00:08:03.428 "uuid": "64ae096a-bde9-4663-91db-1bb992c31299", 00:08:03.428 "assigned_rate_limits": { 00:08:03.428 "rw_ios_per_sec": 0, 00:08:03.428 "rw_mbytes_per_sec": 0, 00:08:03.428 "r_mbytes_per_sec": 0, 00:08:03.428 
"w_mbytes_per_sec": 0 00:08:03.428 }, 00:08:03.428 "claimed": false, 00:08:03.428 "zoned": false, 00:08:03.428 "supported_io_types": { 00:08:03.428 "read": true, 00:08:03.428 "write": true, 00:08:03.428 "unmap": false, 00:08:03.428 "flush": false, 00:08:03.428 "reset": true, 00:08:03.428 "nvme_admin": false, 00:08:03.428 "nvme_io": false, 00:08:03.428 "nvme_io_md": false, 00:08:03.428 "write_zeroes": true, 00:08:03.428 "zcopy": false, 00:08:03.428 "get_zone_info": false, 00:08:03.428 "zone_management": false, 00:08:03.428 "zone_append": false, 00:08:03.428 "compare": false, 00:08:03.428 "compare_and_write": false, 00:08:03.428 "abort": false, 00:08:03.428 "seek_hole": false, 00:08:03.428 "seek_data": false, 00:08:03.428 "copy": false, 00:08:03.428 "nvme_iov_md": false 00:08:03.428 }, 00:08:03.428 "memory_domains": [ 00:08:03.428 { 00:08:03.428 "dma_device_id": "system", 00:08:03.428 "dma_device_type": 1 00:08:03.428 }, 00:08:03.428 { 00:08:03.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.428 "dma_device_type": 2 00:08:03.428 }, 00:08:03.428 { 00:08:03.428 "dma_device_id": "system", 00:08:03.428 "dma_device_type": 1 00:08:03.428 }, 00:08:03.428 { 00:08:03.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.428 "dma_device_type": 2 00:08:03.428 } 00:08:03.428 ], 00:08:03.428 "driver_specific": { 00:08:03.428 "raid": { 00:08:03.428 "uuid": "64ae096a-bde9-4663-91db-1bb992c31299", 00:08:03.428 "strip_size_kb": 0, 00:08:03.429 "state": "online", 00:08:03.429 "raid_level": "raid1", 00:08:03.429 "superblock": false, 00:08:03.429 "num_base_bdevs": 2, 00:08:03.429 "num_base_bdevs_discovered": 2, 00:08:03.429 "num_base_bdevs_operational": 2, 00:08:03.429 "base_bdevs_list": [ 00:08:03.429 { 00:08:03.429 "name": "BaseBdev1", 00:08:03.429 "uuid": "a9942cd7-0809-44b9-bf85-0b56226ff993", 00:08:03.429 "is_configured": true, 00:08:03.429 "data_offset": 0, 00:08:03.429 "data_size": 65536 00:08:03.429 }, 00:08:03.429 { 00:08:03.429 "name": "BaseBdev2", 00:08:03.429 "uuid": 
"ee68cf96-9827-4926-8897-1a236df0d799", 00:08:03.429 "is_configured": true, 00:08:03.429 "data_offset": 0, 00:08:03.429 "data_size": 65536 00:08:03.429 } 00:08:03.429 ] 00:08:03.429 } 00:08:03.429 } 00:08:03.429 }' 00:08:03.429 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:03.429 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:03.429 BaseBdev2' 00:08:03.429 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.689 [2024-12-07 02:41:14.623742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.689 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.689 "name": "Existed_Raid", 00:08:03.689 "uuid": "64ae096a-bde9-4663-91db-1bb992c31299", 00:08:03.689 "strip_size_kb": 0, 00:08:03.689 "state": "online", 00:08:03.689 "raid_level": "raid1", 00:08:03.689 "superblock": false, 00:08:03.689 "num_base_bdevs": 2, 00:08:03.690 "num_base_bdevs_discovered": 1, 00:08:03.690 "num_base_bdevs_operational": 1, 00:08:03.690 "base_bdevs_list": [ 00:08:03.690 { 
00:08:03.690 "name": null, 00:08:03.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.690 "is_configured": false, 00:08:03.690 "data_offset": 0, 00:08:03.690 "data_size": 65536 00:08:03.690 }, 00:08:03.690 { 00:08:03.690 "name": "BaseBdev2", 00:08:03.690 "uuid": "ee68cf96-9827-4926-8897-1a236df0d799", 00:08:03.690 "is_configured": true, 00:08:03.690 "data_offset": 0, 00:08:03.690 "data_size": 65536 00:08:03.690 } 00:08:03.690 ] 00:08:03.690 }' 00:08:03.690 02:41:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.690 02:41:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:04.258 [2024-12-07 02:41:15.143732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.258 [2024-12-07 02:41:15.143876] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.258 [2024-12-07 02:41:15.164696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.258 [2024-12-07 02:41:15.164747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.258 [2024-12-07 02:41:15.164760] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74160 00:08:04.258 02:41:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74160 ']' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74160 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74160 00:08:04.258 killing process with pid 74160 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74160' 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74160 00:08:04.258 [2024-12-07 02:41:15.246820] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.258 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74160 00:08:04.258 [2024-12-07 02:41:15.248359] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.829 00:08:04.829 real 0m4.113s 00:08:04.829 user 0m6.301s 00:08:04.829 sys 0m0.874s 00:08:04.829 ************************************ 00:08:04.829 END TEST raid_state_function_test 00:08:04.829 ************************************ 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 02:41:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:04.829 02:41:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:04.829 02:41:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.829 02:41:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 ************************************ 00:08:04.829 START TEST raid_state_function_test_sb 00:08:04.829 ************************************ 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:04.829 Process raid pid: 74402 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74402 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74402' 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74402 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74402 ']' 00:08:04.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.829 02:41:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.829 [2024-12-07 02:41:15.796343] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:04.829 [2024-12-07 02:41:15.796474] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.089 [2024-12-07 02:41:15.958281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.089 [2024-12-07 02:41:16.027824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.089 [2024-12-07 02:41:16.103024] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.089 [2024-12-07 02:41:16.103062] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 [2024-12-07 02:41:16.617806] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.660 [2024-12-07 02:41:16.617858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.660 [2024-12-07 02:41:16.617871] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.660 [2024-12-07 02:41:16.617881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.660 02:41:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.660 "name": "Existed_Raid", 00:08:05.660 "uuid": "8a752993-8072-43fe-8ba0-ca55f2dba18b", 00:08:05.660 "strip_size_kb": 0, 00:08:05.660 "state": "configuring", 00:08:05.660 "raid_level": "raid1", 00:08:05.660 "superblock": true, 00:08:05.660 "num_base_bdevs": 2, 00:08:05.660 "num_base_bdevs_discovered": 0, 00:08:05.660 "num_base_bdevs_operational": 2, 00:08:05.660 "base_bdevs_list": [ 00:08:05.660 { 00:08:05.660 "name": "BaseBdev1", 00:08:05.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.660 "is_configured": false, 00:08:05.660 "data_offset": 0, 00:08:05.660 "data_size": 0 00:08:05.660 }, 00:08:05.660 { 00:08:05.660 "name": "BaseBdev2", 00:08:05.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.660 "is_configured": false, 00:08:05.660 "data_offset": 0, 00:08:05.660 "data_size": 0 00:08:05.660 } 00:08:05.660 ] 00:08:05.660 }' 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.660 02:41:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.231 [2024-12-07 02:41:17.088890] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.231 [2024-12-07 02:41:17.088993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.231 [2024-12-07 02:41:17.100921] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:06.231 [2024-12-07 02:41:17.100996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:06.231 [2024-12-07 02:41:17.101021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.231 [2024-12-07 02:41:17.101043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.231 [2024-12-07 02:41:17.128203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:06.231 BaseBdev1 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.231 [ 00:08:06.231 { 00:08:06.231 "name": "BaseBdev1", 00:08:06.231 "aliases": [ 00:08:06.231 "d6f7b183-499a-4d64-bb39-20367e6372a2" 00:08:06.231 ], 00:08:06.231 "product_name": "Malloc disk", 00:08:06.231 "block_size": 512, 00:08:06.231 "num_blocks": 65536, 00:08:06.231 "uuid": "d6f7b183-499a-4d64-bb39-20367e6372a2", 00:08:06.231 
"assigned_rate_limits": { 00:08:06.231 "rw_ios_per_sec": 0, 00:08:06.231 "rw_mbytes_per_sec": 0, 00:08:06.231 "r_mbytes_per_sec": 0, 00:08:06.231 "w_mbytes_per_sec": 0 00:08:06.231 }, 00:08:06.231 "claimed": true, 00:08:06.231 "claim_type": "exclusive_write", 00:08:06.231 "zoned": false, 00:08:06.231 "supported_io_types": { 00:08:06.231 "read": true, 00:08:06.231 "write": true, 00:08:06.231 "unmap": true, 00:08:06.231 "flush": true, 00:08:06.231 "reset": true, 00:08:06.231 "nvme_admin": false, 00:08:06.231 "nvme_io": false, 00:08:06.231 "nvme_io_md": false, 00:08:06.231 "write_zeroes": true, 00:08:06.231 "zcopy": true, 00:08:06.231 "get_zone_info": false, 00:08:06.231 "zone_management": false, 00:08:06.231 "zone_append": false, 00:08:06.231 "compare": false, 00:08:06.231 "compare_and_write": false, 00:08:06.231 "abort": true, 00:08:06.231 "seek_hole": false, 00:08:06.231 "seek_data": false, 00:08:06.231 "copy": true, 00:08:06.231 "nvme_iov_md": false 00:08:06.231 }, 00:08:06.231 "memory_domains": [ 00:08:06.231 { 00:08:06.231 "dma_device_id": "system", 00:08:06.231 "dma_device_type": 1 00:08:06.231 }, 00:08:06.231 { 00:08:06.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.231 "dma_device_type": 2 00:08:06.231 } 00:08:06.231 ], 00:08:06.231 "driver_specific": {} 00:08:06.231 } 00:08:06.231 ] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.231 "name": "Existed_Raid", 00:08:06.231 "uuid": "270334fc-9a69-4000-b9cd-1f718a4fa807", 00:08:06.231 "strip_size_kb": 0, 00:08:06.231 "state": "configuring", 00:08:06.231 "raid_level": "raid1", 00:08:06.231 "superblock": true, 00:08:06.231 "num_base_bdevs": 2, 00:08:06.231 "num_base_bdevs_discovered": 1, 00:08:06.231 "num_base_bdevs_operational": 2, 00:08:06.231 "base_bdevs_list": [ 00:08:06.231 { 00:08:06.231 "name": "BaseBdev1", 00:08:06.231 "uuid": "d6f7b183-499a-4d64-bb39-20367e6372a2", 00:08:06.231 "is_configured": true, 00:08:06.231 "data_offset": 2048, 
00:08:06.231 "data_size": 63488 00:08:06.231 }, 00:08:06.231 { 00:08:06.231 "name": "BaseBdev2", 00:08:06.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.231 "is_configured": false, 00:08:06.231 "data_offset": 0, 00:08:06.231 "data_size": 0 00:08:06.231 } 00:08:06.231 ] 00:08:06.231 }' 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.231 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.801 [2024-12-07 02:41:17.595710] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:06.801 [2024-12-07 02:41:17.595802] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.801 [2024-12-07 02:41:17.603748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.801 [2024-12-07 02:41:17.605864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:06.801 [2024-12-07 02:41:17.605906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.801 "name": "Existed_Raid", 00:08:06.801 "uuid": "c6c9bcd0-a8c1-47c9-bffd-25db236f151a", 00:08:06.801 "strip_size_kb": 0, 00:08:06.801 "state": "configuring", 00:08:06.801 "raid_level": "raid1", 00:08:06.801 "superblock": true, 00:08:06.801 "num_base_bdevs": 2, 00:08:06.801 "num_base_bdevs_discovered": 1, 00:08:06.801 "num_base_bdevs_operational": 2, 00:08:06.801 "base_bdevs_list": [ 00:08:06.801 { 00:08:06.801 "name": "BaseBdev1", 00:08:06.801 "uuid": "d6f7b183-499a-4d64-bb39-20367e6372a2", 00:08:06.801 "is_configured": true, 00:08:06.801 "data_offset": 2048, 00:08:06.801 "data_size": 63488 00:08:06.801 }, 00:08:06.801 { 00:08:06.801 "name": "BaseBdev2", 00:08:06.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.801 "is_configured": false, 00:08:06.801 "data_offset": 0, 00:08:06.801 "data_size": 0 00:08:06.801 } 00:08:06.801 ] 00:08:06.801 }' 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.801 02:41:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.061 [2024-12-07 02:41:18.093813] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.061 [2024-12-07 02:41:18.094147] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:07.061 [2024-12-07 02:41:18.094210] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.061 [2024-12-07 02:41:18.094610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:07.061 BaseBdev2 00:08:07.061 [2024-12-07 02:41:18.094833] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:07.061 [2024-12-07 02:41:18.094862] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:07.061 [2024-12-07 02:41:18.095006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.061 [ 00:08:07.061 { 00:08:07.061 "name": "BaseBdev2", 00:08:07.061 "aliases": [ 00:08:07.061 "727d0b27-d8bd-4f01-b7ec-2ed06db5407d" 00:08:07.061 ], 00:08:07.061 "product_name": "Malloc disk", 00:08:07.061 "block_size": 512, 00:08:07.061 "num_blocks": 65536, 00:08:07.061 "uuid": "727d0b27-d8bd-4f01-b7ec-2ed06db5407d", 00:08:07.061 "assigned_rate_limits": { 00:08:07.061 "rw_ios_per_sec": 0, 00:08:07.061 "rw_mbytes_per_sec": 0, 00:08:07.061 "r_mbytes_per_sec": 0, 00:08:07.061 "w_mbytes_per_sec": 0 00:08:07.061 }, 00:08:07.061 "claimed": true, 00:08:07.061 "claim_type": "exclusive_write", 00:08:07.061 "zoned": false, 00:08:07.061 "supported_io_types": { 00:08:07.061 "read": true, 00:08:07.061 "write": true, 00:08:07.061 "unmap": true, 00:08:07.061 "flush": true, 00:08:07.061 "reset": true, 00:08:07.061 "nvme_admin": false, 00:08:07.061 "nvme_io": false, 00:08:07.061 "nvme_io_md": false, 00:08:07.061 "write_zeroes": true, 00:08:07.061 "zcopy": true, 00:08:07.061 "get_zone_info": false, 00:08:07.061 "zone_management": false, 00:08:07.061 "zone_append": false, 00:08:07.061 "compare": false, 00:08:07.061 "compare_and_write": false, 00:08:07.061 "abort": true, 00:08:07.061 "seek_hole": false, 00:08:07.061 "seek_data": false, 00:08:07.061 "copy": true, 00:08:07.061 "nvme_iov_md": false 00:08:07.061 }, 00:08:07.061 "memory_domains": [ 00:08:07.061 { 00:08:07.061 "dma_device_id": "system", 00:08:07.061 "dma_device_type": 1 00:08:07.061 }, 00:08:07.061 { 00:08:07.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.061 "dma_device_type": 2 00:08:07.061 } 00:08:07.061 ], 00:08:07.061 "driver_specific": {} 00:08:07.061 } 00:08:07.061 ] 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.061 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.320 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.320 "name": "Existed_Raid", 00:08:07.320 "uuid": "c6c9bcd0-a8c1-47c9-bffd-25db236f151a", 00:08:07.320 "strip_size_kb": 0, 00:08:07.320 "state": "online", 00:08:07.320 "raid_level": "raid1", 00:08:07.320 "superblock": true, 00:08:07.320 "num_base_bdevs": 2, 00:08:07.320 "num_base_bdevs_discovered": 2, 00:08:07.320 "num_base_bdevs_operational": 2, 00:08:07.320 "base_bdevs_list": [ 00:08:07.320 { 00:08:07.320 "name": "BaseBdev1", 00:08:07.320 "uuid": "d6f7b183-499a-4d64-bb39-20367e6372a2", 00:08:07.320 "is_configured": true, 00:08:07.320 "data_offset": 2048, 00:08:07.320 "data_size": 63488 00:08:07.320 }, 00:08:07.320 { 00:08:07.320 "name": "BaseBdev2", 00:08:07.321 "uuid": "727d0b27-d8bd-4f01-b7ec-2ed06db5407d", 00:08:07.321 "is_configured": true, 00:08:07.321 "data_offset": 2048, 00:08:07.321 "data_size": 63488 00:08:07.321 } 00:08:07.321 ] 00:08:07.321 }' 00:08:07.321 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.321 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.581 [2024-12-07 02:41:18.605269] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.581 "name": "Existed_Raid", 00:08:07.581 "aliases": [ 00:08:07.581 "c6c9bcd0-a8c1-47c9-bffd-25db236f151a" 00:08:07.581 ], 00:08:07.581 "product_name": "Raid Volume", 00:08:07.581 "block_size": 512, 00:08:07.581 "num_blocks": 63488, 00:08:07.581 "uuid": "c6c9bcd0-a8c1-47c9-bffd-25db236f151a", 00:08:07.581 "assigned_rate_limits": { 00:08:07.581 "rw_ios_per_sec": 0, 00:08:07.581 "rw_mbytes_per_sec": 0, 00:08:07.581 "r_mbytes_per_sec": 0, 00:08:07.581 "w_mbytes_per_sec": 0 00:08:07.581 }, 00:08:07.581 "claimed": false, 00:08:07.581 "zoned": false, 00:08:07.581 "supported_io_types": { 00:08:07.581 "read": true, 00:08:07.581 "write": true, 00:08:07.581 "unmap": false, 00:08:07.581 "flush": false, 00:08:07.581 "reset": true, 00:08:07.581 "nvme_admin": false, 00:08:07.581 "nvme_io": false, 00:08:07.581 "nvme_io_md": false, 00:08:07.581 "write_zeroes": true, 00:08:07.581 "zcopy": false, 00:08:07.581 "get_zone_info": false, 00:08:07.581 "zone_management": false, 00:08:07.581 "zone_append": false, 00:08:07.581 "compare": false, 00:08:07.581 "compare_and_write": false, 00:08:07.581 "abort": false, 00:08:07.581 "seek_hole": false, 
00:08:07.581 "seek_data": false, 00:08:07.581 "copy": false, 00:08:07.581 "nvme_iov_md": false 00:08:07.581 }, 00:08:07.581 "memory_domains": [ 00:08:07.581 { 00:08:07.581 "dma_device_id": "system", 00:08:07.581 "dma_device_type": 1 00:08:07.581 }, 00:08:07.581 { 00:08:07.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.581 "dma_device_type": 2 00:08:07.581 }, 00:08:07.581 { 00:08:07.581 "dma_device_id": "system", 00:08:07.581 "dma_device_type": 1 00:08:07.581 }, 00:08:07.581 { 00:08:07.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.581 "dma_device_type": 2 00:08:07.581 } 00:08:07.581 ], 00:08:07.581 "driver_specific": { 00:08:07.581 "raid": { 00:08:07.581 "uuid": "c6c9bcd0-a8c1-47c9-bffd-25db236f151a", 00:08:07.581 "strip_size_kb": 0, 00:08:07.581 "state": "online", 00:08:07.581 "raid_level": "raid1", 00:08:07.581 "superblock": true, 00:08:07.581 "num_base_bdevs": 2, 00:08:07.581 "num_base_bdevs_discovered": 2, 00:08:07.581 "num_base_bdevs_operational": 2, 00:08:07.581 "base_bdevs_list": [ 00:08:07.581 { 00:08:07.581 "name": "BaseBdev1", 00:08:07.581 "uuid": "d6f7b183-499a-4d64-bb39-20367e6372a2", 00:08:07.581 "is_configured": true, 00:08:07.581 "data_offset": 2048, 00:08:07.581 "data_size": 63488 00:08:07.581 }, 00:08:07.581 { 00:08:07.581 "name": "BaseBdev2", 00:08:07.581 "uuid": "727d0b27-d8bd-4f01-b7ec-2ed06db5407d", 00:08:07.581 "is_configured": true, 00:08:07.581 "data_offset": 2048, 00:08:07.581 "data_size": 63488 00:08:07.581 } 00:08:07.581 ] 00:08:07.581 } 00:08:07.581 } 00:08:07.581 }' 00:08:07.581 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:07.842 BaseBdev2' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.842 02:41:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.842 [2024-12-07 02:41:18.848691] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.842 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.102 02:41:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.102 "name": "Existed_Raid", 00:08:08.102 "uuid": "c6c9bcd0-a8c1-47c9-bffd-25db236f151a", 00:08:08.102 "strip_size_kb": 0, 00:08:08.102 "state": "online", 00:08:08.102 "raid_level": "raid1", 00:08:08.102 "superblock": true, 00:08:08.102 "num_base_bdevs": 2, 00:08:08.102 "num_base_bdevs_discovered": 1, 00:08:08.102 "num_base_bdevs_operational": 1, 00:08:08.102 "base_bdevs_list": [ 00:08:08.102 { 00:08:08.102 "name": null, 00:08:08.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.102 "is_configured": false, 00:08:08.102 "data_offset": 0, 00:08:08.102 "data_size": 63488 00:08:08.102 }, 00:08:08.102 { 00:08:08.102 "name": "BaseBdev2", 00:08:08.102 "uuid": "727d0b27-d8bd-4f01-b7ec-2ed06db5407d", 00:08:08.102 "is_configured": true, 00:08:08.102 "data_offset": 2048, 00:08:08.102 "data_size": 63488 00:08:08.102 } 00:08:08.102 ] 00:08:08.102 }' 00:08:08.102 02:41:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.102 02:41:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.362 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:08.362 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.363 [2024-12-07 02:41:19.380713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.363 [2024-12-07 02:41:19.380868] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.363 [2024-12-07 02:41:19.401716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.363 [2024-12-07 02:41:19.401839] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.363 [2024-12-07 02:41:19.401882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.363 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74402 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74402 ']' 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74402 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74402 00:08:08.622 killing process with pid 74402 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74402' 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74402 00:08:08.622 [2024-12-07 02:41:19.484663] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.622 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74402 00:08:08.622 [2024-12-07 02:41:19.486208] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.882 02:41:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:08.882 00:08:08.882 real 0m4.149s 00:08:08.882 user 0m6.390s 00:08:08.882 sys 0m0.890s 00:08:08.882 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.882 02:41:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.882 ************************************ 00:08:08.882 END TEST raid_state_function_test_sb 00:08:08.882 ************************************ 00:08:08.882 02:41:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:08.882 02:41:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:08.882 02:41:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.882 02:41:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.882 ************************************ 00:08:08.882 START TEST 
raid_superblock_test 00:08:08.882 ************************************ 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74643 00:08:08.882 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:08.883 02:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74643 00:08:08.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.883 02:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74643 ']' 00:08:08.883 02:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.883 02:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.883 02:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.883 02:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.883 02:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.143 [2024-12-07 02:41:20.018642] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:09.143 [2024-12-07 02:41:20.018852] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74643 ] 00:08:09.143 [2024-12-07 02:41:20.185241] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.402 [2024-12-07 02:41:20.255225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.402 [2024-12-07 02:41:20.332602] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.402 [2024-12-07 02:41:20.332740] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:09.972 
02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.972 malloc1 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.972 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.972 [2024-12-07 02:41:20.876191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.973 [2024-12-07 02:41:20.876345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.973 [2024-12-07 02:41:20.876369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:09.973 [2024-12-07 02:41:20.876384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.973 [2024-12-07 02:41:20.878889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.973 [2024-12-07 02:41:20.878931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.973 pt1 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.973 malloc2 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.973 [2024-12-07 02:41:20.934837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:09.973 [2024-12-07 02:41:20.935017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.973 [2024-12-07 02:41:20.935079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:09.973 [2024-12-07 02:41:20.935141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.973 [2024-12-07 02:41:20.938932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.973 [2024-12-07 02:41:20.939019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:09.973 
pt2 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.973 [2024-12-07 02:41:20.947250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:09.973 [2024-12-07 02:41:20.949513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:09.973 [2024-12-07 02:41:20.949710] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:09.973 [2024-12-07 02:41:20.949762] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:09.973 [2024-12-07 02:41:20.950037] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:09.973 [2024-12-07 02:41:20.950203] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:09.973 [2024-12-07 02:41:20.950240] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:09.973 [2024-12-07 02:41:20.950414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.973 02:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.973 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.973 "name": "raid_bdev1", 00:08:09.973 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:09.973 "strip_size_kb": 0, 00:08:09.973 "state": "online", 00:08:09.973 "raid_level": "raid1", 00:08:09.973 "superblock": true, 00:08:09.973 "num_base_bdevs": 2, 00:08:09.973 "num_base_bdevs_discovered": 2, 00:08:09.973 "num_base_bdevs_operational": 2, 00:08:09.973 "base_bdevs_list": [ 00:08:09.973 { 00:08:09.973 "name": "pt1", 00:08:09.973 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:09.973 "is_configured": true, 00:08:09.973 "data_offset": 2048, 00:08:09.973 "data_size": 63488 00:08:09.973 }, 00:08:09.973 { 00:08:09.973 "name": "pt2", 00:08:09.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.973 "is_configured": true, 00:08:09.973 "data_offset": 2048, 00:08:09.973 "data_size": 63488 00:08:09.973 } 00:08:09.973 ] 00:08:09.973 }' 00:08:09.973 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.973 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.543 [2024-12-07 02:41:21.390735] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.543 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:10.543 "name": "raid_bdev1", 00:08:10.543 "aliases": [ 00:08:10.543 "1825483b-2029-488c-960a-d82982fd3c17" 00:08:10.543 ], 00:08:10.543 "product_name": "Raid Volume", 00:08:10.543 "block_size": 512, 00:08:10.543 "num_blocks": 63488, 00:08:10.543 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:10.543 "assigned_rate_limits": { 00:08:10.543 "rw_ios_per_sec": 0, 00:08:10.543 "rw_mbytes_per_sec": 0, 00:08:10.543 "r_mbytes_per_sec": 0, 00:08:10.543 "w_mbytes_per_sec": 0 00:08:10.543 }, 00:08:10.543 "claimed": false, 00:08:10.543 "zoned": false, 00:08:10.543 "supported_io_types": { 00:08:10.543 "read": true, 00:08:10.543 "write": true, 00:08:10.543 "unmap": false, 00:08:10.543 "flush": false, 00:08:10.543 "reset": true, 00:08:10.543 "nvme_admin": false, 00:08:10.543 "nvme_io": false, 00:08:10.543 "nvme_io_md": false, 00:08:10.543 "write_zeroes": true, 00:08:10.543 "zcopy": false, 00:08:10.543 "get_zone_info": false, 00:08:10.543 "zone_management": false, 00:08:10.543 "zone_append": false, 00:08:10.543 "compare": false, 00:08:10.543 "compare_and_write": false, 00:08:10.543 "abort": false, 00:08:10.543 "seek_hole": false, 00:08:10.543 "seek_data": false, 00:08:10.543 "copy": false, 00:08:10.543 "nvme_iov_md": false 00:08:10.543 }, 00:08:10.543 "memory_domains": [ 00:08:10.543 { 00:08:10.543 "dma_device_id": "system", 00:08:10.543 "dma_device_type": 1 00:08:10.543 }, 00:08:10.543 { 00:08:10.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.543 "dma_device_type": 2 00:08:10.543 }, 00:08:10.543 { 00:08:10.543 "dma_device_id": "system", 00:08:10.543 "dma_device_type": 1 00:08:10.543 }, 00:08:10.544 { 00:08:10.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.544 "dma_device_type": 2 00:08:10.544 } 00:08:10.544 ], 00:08:10.544 "driver_specific": { 00:08:10.544 "raid": { 00:08:10.544 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:10.544 "strip_size_kb": 0, 00:08:10.544 "state": "online", 00:08:10.544 "raid_level": "raid1", 
00:08:10.544 "superblock": true, 00:08:10.544 "num_base_bdevs": 2, 00:08:10.544 "num_base_bdevs_discovered": 2, 00:08:10.544 "num_base_bdevs_operational": 2, 00:08:10.544 "base_bdevs_list": [ 00:08:10.544 { 00:08:10.544 "name": "pt1", 00:08:10.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.544 "is_configured": true, 00:08:10.544 "data_offset": 2048, 00:08:10.544 "data_size": 63488 00:08:10.544 }, 00:08:10.544 { 00:08:10.544 "name": "pt2", 00:08:10.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.544 "is_configured": true, 00:08:10.544 "data_offset": 2048, 00:08:10.544 "data_size": 63488 00:08:10.544 } 00:08:10.544 ] 00:08:10.544 } 00:08:10.544 } 00:08:10.544 }' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:10.544 pt2' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.544 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.544 [2024-12-07 02:41:21.602254] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1825483b-2029-488c-960a-d82982fd3c17 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1825483b-2029-488c-960a-d82982fd3c17 ']' 00:08:10.803 02:41:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.803 [2024-12-07 02:41:21.645946] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.803 [2024-12-07 02:41:21.645972] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.803 [2024-12-07 02:41:21.646043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.803 [2024-12-07 02:41:21.646116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.803 [2024-12-07 02:41:21.646126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:10.803 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:10.804 02:41:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.804 [2024-12-07 02:41:21.781787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:10.804 [2024-12-07 02:41:21.784000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:10.804 [2024-12-07 02:41:21.784123] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:10.804 [2024-12-07 02:41:21.784218] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:10.804 [2024-12-07 02:41:21.784300] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.804 [2024-12-07 02:41:21.784330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:10.804 request: 00:08:10.804 { 00:08:10.804 "name": "raid_bdev1", 00:08:10.804 "raid_level": "raid1", 00:08:10.804 "base_bdevs": [ 00:08:10.804 "malloc1", 00:08:10.804 "malloc2" 00:08:10.804 ], 00:08:10.804 "superblock": false, 00:08:10.804 "method": "bdev_raid_create", 00:08:10.804 "req_id": 1 00:08:10.804 } 00:08:10.804 Got 
JSON-RPC error response 00:08:10.804 response: 00:08:10.804 { 00:08:10.804 "code": -17, 00:08:10.804 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:10.804 } 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.804 [2024-12-07 02:41:21.833674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:10.804 [2024-12-07 02:41:21.833772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:10.804 [2024-12-07 02:41:21.833809] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:10.804 [2024-12-07 02:41:21.833836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.804 [2024-12-07 02:41:21.836361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.804 [2024-12-07 02:41:21.836428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:10.804 [2024-12-07 02:41:21.836536] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:10.804 [2024-12-07 02:41:21.836612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:10.804 pt1 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.804 
02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.804 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.804 "name": "raid_bdev1", 00:08:10.804 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:10.804 "strip_size_kb": 0, 00:08:10.804 "state": "configuring", 00:08:10.804 "raid_level": "raid1", 00:08:10.804 "superblock": true, 00:08:10.804 "num_base_bdevs": 2, 00:08:10.804 "num_base_bdevs_discovered": 1, 00:08:10.804 "num_base_bdevs_operational": 2, 00:08:10.804 "base_bdevs_list": [ 00:08:10.804 { 00:08:10.804 "name": "pt1", 00:08:10.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.804 "is_configured": true, 00:08:10.804 "data_offset": 2048, 00:08:10.804 "data_size": 63488 00:08:10.804 }, 00:08:10.804 { 00:08:10.804 "name": null, 00:08:10.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.805 "is_configured": false, 00:08:10.805 "data_offset": 2048, 00:08:10.805 "data_size": 63488 00:08:10.805 } 00:08:10.805 ] 00:08:10.805 }' 00:08:10.805 02:41:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.805 02:41:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.374 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:11.374 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:11.374 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:11.374 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:11.374 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.374 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.374 [2024-12-07 02:41:22.280912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:11.374 [2024-12-07 02:41:22.281001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.374 [2024-12-07 02:41:22.281027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:11.374 [2024-12-07 02:41:22.281037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.374 [2024-12-07 02:41:22.281528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.374 [2024-12-07 02:41:22.281558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:11.374 [2024-12-07 02:41:22.281663] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:11.374 [2024-12-07 02:41:22.281686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:11.374 [2024-12-07 02:41:22.281786] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:11.374 [2024-12-07 02:41:22.281802] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:11.374 [2024-12-07 02:41:22.282054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:11.374 [2024-12-07 02:41:22.282177] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:11.374 [2024-12-07 02:41:22.282193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006980 00:08:11.374 [2024-12-07 02:41:22.282303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.374 pt2 00:08:11.374 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.375 "name": "raid_bdev1", 00:08:11.375 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:11.375 "strip_size_kb": 0, 00:08:11.375 "state": "online", 00:08:11.375 "raid_level": "raid1", 00:08:11.375 "superblock": true, 00:08:11.375 "num_base_bdevs": 2, 00:08:11.375 "num_base_bdevs_discovered": 2, 00:08:11.375 "num_base_bdevs_operational": 2, 00:08:11.375 "base_bdevs_list": [ 00:08:11.375 { 00:08:11.375 "name": "pt1", 00:08:11.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.375 "is_configured": true, 00:08:11.375 "data_offset": 2048, 00:08:11.375 "data_size": 63488 00:08:11.375 }, 00:08:11.375 { 00:08:11.375 "name": "pt2", 00:08:11.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.375 "is_configured": true, 00:08:11.375 "data_offset": 2048, 00:08:11.375 "data_size": 63488 00:08:11.375 } 00:08:11.375 ] 00:08:11.375 }' 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.375 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.943 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.944 [2024-12-07 02:41:22.736357] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.944 "name": "raid_bdev1", 00:08:11.944 "aliases": [ 00:08:11.944 "1825483b-2029-488c-960a-d82982fd3c17" 00:08:11.944 ], 00:08:11.944 "product_name": "Raid Volume", 00:08:11.944 "block_size": 512, 00:08:11.944 "num_blocks": 63488, 00:08:11.944 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:11.944 "assigned_rate_limits": { 00:08:11.944 "rw_ios_per_sec": 0, 00:08:11.944 "rw_mbytes_per_sec": 0, 00:08:11.944 "r_mbytes_per_sec": 0, 00:08:11.944 "w_mbytes_per_sec": 0 00:08:11.944 }, 00:08:11.944 "claimed": false, 00:08:11.944 "zoned": false, 00:08:11.944 "supported_io_types": { 00:08:11.944 "read": true, 00:08:11.944 "write": true, 00:08:11.944 "unmap": false, 00:08:11.944 "flush": false, 00:08:11.944 "reset": true, 00:08:11.944 "nvme_admin": false, 00:08:11.944 "nvme_io": false, 00:08:11.944 "nvme_io_md": false, 00:08:11.944 "write_zeroes": true, 00:08:11.944 "zcopy": false, 00:08:11.944 "get_zone_info": false, 00:08:11.944 "zone_management": false, 00:08:11.944 "zone_append": false, 00:08:11.944 "compare": false, 00:08:11.944 "compare_and_write": false, 00:08:11.944 "abort": false, 00:08:11.944 "seek_hole": false, 00:08:11.944 "seek_data": false, 00:08:11.944 "copy": false, 00:08:11.944 "nvme_iov_md": false 00:08:11.944 }, 00:08:11.944 "memory_domains": [ 00:08:11.944 { 00:08:11.944 "dma_device_id": 
"system", 00:08:11.944 "dma_device_type": 1 00:08:11.944 }, 00:08:11.944 { 00:08:11.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.944 "dma_device_type": 2 00:08:11.944 }, 00:08:11.944 { 00:08:11.944 "dma_device_id": "system", 00:08:11.944 "dma_device_type": 1 00:08:11.944 }, 00:08:11.944 { 00:08:11.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.944 "dma_device_type": 2 00:08:11.944 } 00:08:11.944 ], 00:08:11.944 "driver_specific": { 00:08:11.944 "raid": { 00:08:11.944 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:11.944 "strip_size_kb": 0, 00:08:11.944 "state": "online", 00:08:11.944 "raid_level": "raid1", 00:08:11.944 "superblock": true, 00:08:11.944 "num_base_bdevs": 2, 00:08:11.944 "num_base_bdevs_discovered": 2, 00:08:11.944 "num_base_bdevs_operational": 2, 00:08:11.944 "base_bdevs_list": [ 00:08:11.944 { 00:08:11.944 "name": "pt1", 00:08:11.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.944 "is_configured": true, 00:08:11.944 "data_offset": 2048, 00:08:11.944 "data_size": 63488 00:08:11.944 }, 00:08:11.944 { 00:08:11.944 "name": "pt2", 00:08:11.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.944 "is_configured": true, 00:08:11.944 "data_offset": 2048, 00:08:11.944 "data_size": 63488 00:08:11.944 } 00:08:11.944 ] 00:08:11.944 } 00:08:11.944 } 00:08:11.944 }' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:11.944 pt2' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 
-- # jq -r '.[] | .uuid' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.944 [2024-12-07 02:41:22.951974] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1825483b-2029-488c-960a-d82982fd3c17 '!=' 1825483b-2029-488c-960a-d82982fd3c17 ']' 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.944 [2024-12-07 02:41:22.995731] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:11.944 02:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.944 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.224 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.224 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.224 "name": "raid_bdev1", 00:08:12.224 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:12.224 "strip_size_kb": 0, 00:08:12.224 "state": "online", 00:08:12.224 "raid_level": "raid1", 00:08:12.224 "superblock": true, 00:08:12.224 "num_base_bdevs": 2, 00:08:12.224 "num_base_bdevs_discovered": 1, 00:08:12.224 "num_base_bdevs_operational": 1, 00:08:12.224 "base_bdevs_list": [ 00:08:12.224 { 00:08:12.224 "name": null, 00:08:12.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.224 "is_configured": false, 00:08:12.224 "data_offset": 0, 00:08:12.224 "data_size": 63488 00:08:12.224 }, 00:08:12.224 { 00:08:12.224 "name": "pt2", 00:08:12.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.224 "is_configured": true, 00:08:12.224 "data_offset": 2048, 00:08:12.224 "data_size": 63488 00:08:12.224 } 00:08:12.224 ] 00:08:12.224 }' 00:08:12.224 02:41:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.224 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.494 [2024-12-07 02:41:23.451729] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.494 [2024-12-07 02:41:23.451832] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.494 [2024-12-07 02:41:23.451940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.494 [2024-12-07 02:41:23.452004] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.494 [2024-12-07 02:41:23.452015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:12.494 
02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:12.494 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.495 [2024-12-07 02:41:23.523703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:12.495 [2024-12-07 02:41:23.523758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.495 [2024-12-07 02:41:23.523778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:12.495 [2024-12-07 02:41:23.523788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.495 [2024-12-07 
02:41:23.526303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.495 [2024-12-07 02:41:23.526340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:12.495 [2024-12-07 02:41:23.526423] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:12.495 [2024-12-07 02:41:23.526455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:12.495 [2024-12-07 02:41:23.526538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:12.495 [2024-12-07 02:41:23.526546] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:12.495 [2024-12-07 02:41:23.526799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.495 [2024-12-07 02:41:23.526926] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:12.495 [2024-12-07 02:41:23.526939] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:12.495 [2024-12-07 02:41:23.527039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.495 pt2 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.495 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.754 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.754 "name": "raid_bdev1", 00:08:12.754 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:12.754 "strip_size_kb": 0, 00:08:12.754 "state": "online", 00:08:12.754 "raid_level": "raid1", 00:08:12.754 "superblock": true, 00:08:12.754 "num_base_bdevs": 2, 00:08:12.754 "num_base_bdevs_discovered": 1, 00:08:12.754 "num_base_bdevs_operational": 1, 00:08:12.754 "base_bdevs_list": [ 00:08:12.754 { 00:08:12.754 "name": null, 00:08:12.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.754 "is_configured": false, 00:08:12.754 "data_offset": 2048, 00:08:12.754 "data_size": 63488 00:08:12.754 }, 00:08:12.754 { 00:08:12.754 "name": "pt2", 00:08:12.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.754 "is_configured": true, 00:08:12.754 "data_offset": 2048, 00:08:12.754 "data_size": 63488 00:08:12.754 } 00:08:12.754 ] 00:08:12.754 }' 
00:08:12.754 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.754 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.013 [2024-12-07 02:41:23.955719] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.013 [2024-12-07 02:41:23.955790] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.013 [2024-12-07 02:41:23.955864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.013 [2024-12-07 02:41:23.955917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.013 [2024-12-07 02:41:23.955966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:13.013 02:41:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.013 [2024-12-07 02:41:24.015669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.013 [2024-12-07 02:41:24.015758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.013 [2024-12-07 02:41:24.015795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:13.013 [2024-12-07 02:41:24.015832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.013 [2024-12-07 02:41:24.018183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.013 [2024-12-07 02:41:24.018247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.013 [2024-12-07 02:41:24.018322] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:13.013 [2024-12-07 02:41:24.018375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.013 [2024-12-07 02:41:24.018503] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:13.013 [2024-12-07 02:41:24.018556] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.013 [2024-12-07 02:41:24.018599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:08:13.013 [2024-12-07 02:41:24.018679] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:13.013 [2024-12-07 02:41:24.018785] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:13.013 [2024-12-07 02:41:24.018825] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:13.013 [2024-12-07 02:41:24.019061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:13.013 [2024-12-07 02:41:24.019210] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:13.013 [2024-12-07 02:41:24.019246] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:13.013 [2024-12-07 02:41:24.019387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.013 pt1 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.013 "name": "raid_bdev1", 00:08:13.013 "uuid": "1825483b-2029-488c-960a-d82982fd3c17", 00:08:13.013 "strip_size_kb": 0, 00:08:13.013 "state": "online", 00:08:13.013 "raid_level": "raid1", 00:08:13.013 "superblock": true, 00:08:13.013 "num_base_bdevs": 2, 00:08:13.013 "num_base_bdevs_discovered": 1, 00:08:13.013 "num_base_bdevs_operational": 1, 00:08:13.013 "base_bdevs_list": [ 00:08:13.013 { 00:08:13.013 "name": null, 00:08:13.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.013 "is_configured": false, 00:08:13.013 "data_offset": 2048, 00:08:13.013 "data_size": 63488 00:08:13.013 }, 00:08:13.013 { 00:08:13.013 "name": "pt2", 00:08:13.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.013 "is_configured": true, 00:08:13.013 "data_offset": 2048, 00:08:13.013 "data_size": 63488 00:08:13.013 } 00:08:13.013 ] 00:08:13.013 }' 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.013 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.581 [2024-12-07 02:41:24.507955] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1825483b-2029-488c-960a-d82982fd3c17 '!=' 1825483b-2029-488c-960a-d82982fd3c17 ']' 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74643 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74643 ']' 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74643 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74643 00:08:13.581 killing process with pid 
74643 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74643' 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74643 00:08:13.581 [2024-12-07 02:41:24.577529] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.581 [2024-12-07 02:41:24.577616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.581 [2024-12-07 02:41:24.577663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.581 [2024-12-07 02:41:24.577672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:13.581 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74643 00:08:13.581 [2024-12-07 02:41:24.618891] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.192 02:41:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:14.192 00:08:14.192 real 0m5.071s 00:08:14.192 user 0m8.028s 00:08:14.192 sys 0m1.110s 00:08:14.192 ************************************ 00:08:14.192 END TEST raid_superblock_test 00:08:14.192 ************************************ 00:08:14.192 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.192 02:41:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.192 02:41:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:14.192 02:41:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:14.192 02:41:25 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.192 02:41:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.192 ************************************ 00:08:14.192 START TEST raid_read_error_test 00:08:14.192 ************************************ 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:14.192 02:41:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CJMwviRdl9 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74962 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74962 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74962 ']' 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.192 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.192 [2024-12-07 02:41:25.182653] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:14.192 [2024-12-07 02:41:25.182846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74962 ] 00:08:14.451 [2024-12-07 02:41:25.347750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.451 [2024-12-07 02:41:25.418398] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.451 [2024-12-07 02:41:25.495165] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.451 [2024-12-07 02:41:25.495202] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.019 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.019 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:15.019 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.019 02:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:15.019 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.019 02:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.019 BaseBdev1_malloc 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.019 true 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.019 [2024-12-07 02:41:26.024916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:15.019 [2024-12-07 02:41:26.024983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.019 [2024-12-07 02:41:26.025001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:15.019 [2024-12-07 02:41:26.025017] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.019 [2024-12-07 02:41:26.027417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.019 [2024-12-07 02:41:26.027451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:15.019 BaseBdev1 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:15.019 BaseBdev2_malloc 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.019 true 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.019 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.019 [2024-12-07 02:41:26.090215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:15.019 [2024-12-07 02:41:26.090379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.019 [2024-12-07 02:41:26.090441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:15.019 [2024-12-07 02:41:26.090496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.019 [2024-12-07 02:41:26.093716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.019 [2024-12-07 02:41:26.093795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:15.019 BaseBdev2 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:15.279 02:41:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.279 [2024-12-07 02:41:26.102364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.279 [2024-12-07 02:41:26.104525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.279 [2024-12-07 02:41:26.104726] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:15.279 [2024-12-07 02:41:26.104744] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:15.279 [2024-12-07 02:41:26.105048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:15.279 [2024-12-07 02:41:26.105192] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:15.279 [2024-12-07 02:41:26.105205] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:15.279 [2024-12-07 02:41:26.105327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.279 "name": "raid_bdev1", 00:08:15.279 "uuid": "f4f7e5ee-4145-4b53-8eec-42c87707236e", 00:08:15.279 "strip_size_kb": 0, 00:08:15.279 "state": "online", 00:08:15.279 "raid_level": "raid1", 00:08:15.279 "superblock": true, 00:08:15.279 "num_base_bdevs": 2, 00:08:15.279 "num_base_bdevs_discovered": 2, 00:08:15.279 "num_base_bdevs_operational": 2, 00:08:15.279 "base_bdevs_list": [ 00:08:15.279 { 00:08:15.279 "name": "BaseBdev1", 00:08:15.279 "uuid": "05dc002c-7b28-504d-bcc7-87ce0cdd39e0", 00:08:15.279 "is_configured": true, 00:08:15.279 "data_offset": 2048, 00:08:15.279 "data_size": 63488 00:08:15.279 }, 00:08:15.279 { 00:08:15.279 "name": "BaseBdev2", 00:08:15.279 "uuid": "3601cbf5-116c-5f69-93fc-baa86f01c0d1", 00:08:15.279 "is_configured": true, 00:08:15.279 "data_offset": 2048, 00:08:15.279 "data_size": 63488 00:08:15.279 } 00:08:15.279 ] 00:08:15.279 }' 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.279 02:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.538 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:15.538 02:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:15.538 [2024-12-07 02:41:26.610096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:16.477 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:16.477 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.477 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.736 02:41:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.736 "name": "raid_bdev1", 00:08:16.736 "uuid": "f4f7e5ee-4145-4b53-8eec-42c87707236e", 00:08:16.736 "strip_size_kb": 0, 00:08:16.736 "state": "online", 00:08:16.736 "raid_level": "raid1", 00:08:16.736 "superblock": true, 00:08:16.736 "num_base_bdevs": 2, 00:08:16.736 "num_base_bdevs_discovered": 2, 00:08:16.736 "num_base_bdevs_operational": 2, 00:08:16.736 "base_bdevs_list": [ 00:08:16.736 { 00:08:16.736 "name": "BaseBdev1", 00:08:16.736 "uuid": "05dc002c-7b28-504d-bcc7-87ce0cdd39e0", 00:08:16.736 "is_configured": true, 00:08:16.736 "data_offset": 2048, 00:08:16.736 "data_size": 63488 00:08:16.736 }, 00:08:16.736 { 00:08:16.736 "name": "BaseBdev2", 00:08:16.736 "uuid": "3601cbf5-116c-5f69-93fc-baa86f01c0d1", 00:08:16.736 "is_configured": true, 00:08:16.736 "data_offset": 2048, 00:08:16.736 "data_size": 63488 
00:08:16.736 } 00:08:16.736 ] 00:08:16.736 }' 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.736 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.996 [2024-12-07 02:41:27.983997] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:16.996 [2024-12-07 02:41:27.984124] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.996 [2024-12-07 02:41:27.986580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.996 [2024-12-07 02:41:27.986646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.996 [2024-12-07 02:41:27.986739] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.996 [2024-12-07 02:41:27.986750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:16.996 { 00:08:16.996 "results": [ 00:08:16.996 { 00:08:16.996 "job": "raid_bdev1", 00:08:16.996 "core_mask": "0x1", 00:08:16.996 "workload": "randrw", 00:08:16.996 "percentage": 50, 00:08:16.996 "status": "finished", 00:08:16.996 "queue_depth": 1, 00:08:16.996 "io_size": 131072, 00:08:16.996 "runtime": 1.374423, 00:08:16.996 "iops": 16151.505031565974, 00:08:16.996 "mibps": 2018.9381289457467, 00:08:16.996 "io_failed": 0, 00:08:16.996 "io_timeout": 0, 00:08:16.996 "avg_latency_us": 59.4783583429837, 00:08:16.996 "min_latency_us": 21.910917030567685, 00:08:16.996 "max_latency_us": 1516.7720524017468 00:08:16.996 } 00:08:16.996 ], 
00:08:16.996 "core_count": 1 00:08:16.996 } 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74962 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74962 ']' 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74962 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.996 02:41:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74962 00:08:16.996 02:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.996 02:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.996 02:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74962' 00:08:16.996 killing process with pid 74962 00:08:16.996 02:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74962 00:08:16.996 [2024-12-07 02:41:28.034678] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.996 02:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74962 00:08:16.996 [2024-12-07 02:41:28.063522] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CJMwviRdl9 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:17.567 ************************************ 00:08:17.567 END TEST raid_read_error_test 00:08:17.567 ************************************ 00:08:17.567 00:08:17.567 real 0m3.371s 00:08:17.567 user 0m4.079s 00:08:17.567 sys 0m0.601s 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.567 02:41:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.567 02:41:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:17.567 02:41:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:17.567 02:41:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.567 02:41:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.567 ************************************ 00:08:17.567 START TEST raid_write_error_test 00:08:17.567 ************************************ 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3A2JDbIuQg 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75098 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75098 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75098 ']' 00:08:17.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.567 02:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.567 [2024-12-07 02:41:28.629140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:17.567 [2024-12-07 02:41:28.629282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75098 ] 00:08:17.828 [2024-12-07 02:41:28.794755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.828 [2024-12-07 02:41:28.864890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.087 [2024-12-07 02:41:28.942594] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.087 [2024-12-07 02:41:28.942637] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.655 BaseBdev1_malloc 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.655 true 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.655 [2024-12-07 02:41:29.493223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.655 [2024-12-07 02:41:29.493292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.655 [2024-12-07 02:41:29.493319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.655 [2024-12-07 02:41:29.493330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.655 [2024-12-07 02:41:29.495703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.655 [2024-12-07 02:41:29.495796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.655 BaseBdev1 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.655 BaseBdev2_malloc 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.655 02:41:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.655 true 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.655 [2024-12-07 02:41:29.549843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.655 [2024-12-07 02:41:29.549890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.655 [2024-12-07 02:41:29.549908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.655 [2024-12-07 02:41:29.549916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.655 [2024-12-07 02:41:29.552225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.655 [2024-12-07 02:41:29.552299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.655 BaseBdev2 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.655 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.655 [2024-12-07 02:41:29.561859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:18.655 [2024-12-07 02:41:29.563939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.655 [2024-12-07 02:41:29.564115] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:18.655 [2024-12-07 02:41:29.564128] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:18.655 [2024-12-07 02:41:29.564378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:18.655 [2024-12-07 02:41:29.564527] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:18.656 [2024-12-07 02:41:29.564545] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:18.656 [2024-12-07 02:41:29.564693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.656 "name": "raid_bdev1", 00:08:18.656 "uuid": "aefd9e92-f483-46aa-8163-6b91bf058205", 00:08:18.656 "strip_size_kb": 0, 00:08:18.656 "state": "online", 00:08:18.656 "raid_level": "raid1", 00:08:18.656 "superblock": true, 00:08:18.656 "num_base_bdevs": 2, 00:08:18.656 "num_base_bdevs_discovered": 2, 00:08:18.656 "num_base_bdevs_operational": 2, 00:08:18.656 "base_bdevs_list": [ 00:08:18.656 { 00:08:18.656 "name": "BaseBdev1", 00:08:18.656 "uuid": "20d4e2cb-30ce-5cc2-be31-c166a259f6e3", 00:08:18.656 "is_configured": true, 00:08:18.656 "data_offset": 2048, 00:08:18.656 "data_size": 63488 00:08:18.656 }, 00:08:18.656 { 00:08:18.656 "name": "BaseBdev2", 00:08:18.656 "uuid": "76ae0af8-d60e-5bfb-ab90-6d6077a2f824", 00:08:18.656 "is_configured": true, 00:08:18.656 "data_offset": 2048, 00:08:18.656 "data_size": 63488 00:08:18.656 } 00:08:18.656 ] 00:08:18.656 }' 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.656 02:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.915 02:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.915 02:41:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.174 [2024-12-07 02:41:30.057440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.114 [2024-12-07 02:41:30.980645] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:20.114 [2024-12-07 02:41:30.980776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.114 [2024-12-07 02:41:30.981033] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.114 02:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.114 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.114 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.114 "name": "raid_bdev1", 00:08:20.114 "uuid": "aefd9e92-f483-46aa-8163-6b91bf058205", 00:08:20.114 "strip_size_kb": 0, 00:08:20.114 "state": "online", 00:08:20.114 "raid_level": "raid1", 00:08:20.114 "superblock": true, 00:08:20.114 "num_base_bdevs": 2, 00:08:20.114 "num_base_bdevs_discovered": 1, 00:08:20.114 "num_base_bdevs_operational": 1, 00:08:20.114 "base_bdevs_list": [ 00:08:20.114 { 00:08:20.114 "name": null, 00:08:20.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.114 "is_configured": false, 00:08:20.114 "data_offset": 0, 00:08:20.114 "data_size": 63488 00:08:20.114 }, 00:08:20.114 { 00:08:20.114 "name": 
"BaseBdev2", 00:08:20.114 "uuid": "76ae0af8-d60e-5bfb-ab90-6d6077a2f824", 00:08:20.114 "is_configured": true, 00:08:20.114 "data_offset": 2048, 00:08:20.114 "data_size": 63488 00:08:20.114 } 00:08:20.114 ] 00:08:20.114 }' 00:08:20.114 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.114 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.373 [2024-12-07 02:41:31.385520] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.373 [2024-12-07 02:41:31.385631] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.373 [2024-12-07 02:41:31.388123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.373 [2024-12-07 02:41:31.388214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.373 [2024-12-07 02:41:31.388290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.373 [2024-12-07 02:41:31.388378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.373 { 00:08:20.373 "results": [ 00:08:20.373 { 00:08:20.373 "job": "raid_bdev1", 00:08:20.373 "core_mask": "0x1", 00:08:20.373 "workload": "randrw", 00:08:20.373 "percentage": 50, 00:08:20.373 "status": "finished", 00:08:20.373 "queue_depth": 1, 00:08:20.373 "io_size": 131072, 00:08:20.373 "runtime": 1.328701, 00:08:20.373 "iops": 19952.570217076678, 
00:08:20.373 "mibps": 2494.0712771345848, 00:08:20.373 "io_failed": 0, 00:08:20.373 "io_timeout": 0, 00:08:20.373 "avg_latency_us": 47.68689697726197, 00:08:20.373 "min_latency_us": 20.68122270742358, 00:08:20.373 "max_latency_us": 1330.7528384279476 00:08:20.373 } 00:08:20.373 ], 00:08:20.373 "core_count": 1 00:08:20.373 } 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75098 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75098 ']' 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75098 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.373 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75098 00:08:20.373 killing process with pid 75098 00:08:20.374 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.374 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.374 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75098' 00:08:20.374 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75098 00:08:20.374 [2024-12-07 02:41:31.435384] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.374 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75098 00:08:20.633 [2024-12-07 02:41:31.463385] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.892 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3A2JDbIuQg 00:08:20.892 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 
-- # grep raid_bdev1 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:20.893 00:08:20.893 real 0m3.326s 00:08:20.893 user 0m3.991s 00:08:20.893 sys 0m0.651s 00:08:20.893 ************************************ 00:08:20.893 END TEST raid_write_error_test 00:08:20.893 ************************************ 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.893 02:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.893 02:41:31 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:20.893 02:41:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:20.893 02:41:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:20.893 02:41:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:20.893 02:41:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.893 02:41:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.893 ************************************ 00:08:20.893 START TEST raid_state_function_test 00:08:20.893 ************************************ 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.893 02:41:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75230 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75230' 00:08:20.893 Process raid pid: 75230 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 75230 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 75230 ']' 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.893 02:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.153 [2024-12-07 02:41:32.010472] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:21.153 [2024-12-07 02:41:32.010706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.153 [2024-12-07 02:41:32.172428] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.411 [2024-12-07 02:41:32.249324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.411 [2024-12-07 02:41:32.325326] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.411 [2024-12-07 02:41:32.325476] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.980 [2024-12-07 02:41:32.841758] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.980 [2024-12-07 02:41:32.841864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.980 [2024-12-07 02:41:32.841907] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.980 [2024-12-07 02:41:32.841930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.980 [2024-12-07 02:41:32.841947] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:21.980 [2024-12-07 02:41:32.841970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.980 "name": "Existed_Raid", 00:08:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.980 "strip_size_kb": 64, 00:08:21.980 "state": "configuring", 00:08:21.980 "raid_level": "raid0", 00:08:21.980 "superblock": false, 00:08:21.980 "num_base_bdevs": 3, 00:08:21.980 "num_base_bdevs_discovered": 0, 00:08:21.980 "num_base_bdevs_operational": 3, 00:08:21.980 "base_bdevs_list": [ 00:08:21.980 { 00:08:21.980 "name": "BaseBdev1", 00:08:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.980 "is_configured": false, 00:08:21.980 "data_offset": 0, 00:08:21.980 "data_size": 0 00:08:21.980 }, 00:08:21.980 { 00:08:21.980 "name": "BaseBdev2", 00:08:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.980 "is_configured": false, 00:08:21.980 "data_offset": 0, 00:08:21.980 "data_size": 0 00:08:21.980 }, 00:08:21.980 { 00:08:21.980 "name": "BaseBdev3", 00:08:21.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.980 "is_configured": false, 00:08:21.980 "data_offset": 0, 00:08:21.980 "data_size": 0 00:08:21.980 } 00:08:21.980 ] 00:08:21.980 }' 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.980 02:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.240 02:41:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 [2024-12-07 02:41:33.284895] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.240 [2024-12-07 02:41:33.284991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.240 [2024-12-07 02:41:33.296923] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.240 [2024-12-07 02:41:33.296998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.240 [2024-12-07 02:41:33.297023] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.240 [2024-12-07 02:41:33.297045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.240 [2024-12-07 02:41:33.297062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.240 [2024-12-07 02:41:33.297082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:22.240 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.499 [2024-12-07 02:41:33.323854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.499 BaseBdev1 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.499 [ 00:08:22.499 { 00:08:22.499 "name": "BaseBdev1", 00:08:22.499 "aliases": [ 00:08:22.499 "55fd05ee-a824-401a-bf46-1f85c1514338" 00:08:22.499 ], 00:08:22.499 
"product_name": "Malloc disk", 00:08:22.499 "block_size": 512, 00:08:22.499 "num_blocks": 65536, 00:08:22.499 "uuid": "55fd05ee-a824-401a-bf46-1f85c1514338", 00:08:22.499 "assigned_rate_limits": { 00:08:22.499 "rw_ios_per_sec": 0, 00:08:22.499 "rw_mbytes_per_sec": 0, 00:08:22.499 "r_mbytes_per_sec": 0, 00:08:22.499 "w_mbytes_per_sec": 0 00:08:22.499 }, 00:08:22.499 "claimed": true, 00:08:22.499 "claim_type": "exclusive_write", 00:08:22.499 "zoned": false, 00:08:22.499 "supported_io_types": { 00:08:22.499 "read": true, 00:08:22.499 "write": true, 00:08:22.499 "unmap": true, 00:08:22.499 "flush": true, 00:08:22.499 "reset": true, 00:08:22.499 "nvme_admin": false, 00:08:22.499 "nvme_io": false, 00:08:22.499 "nvme_io_md": false, 00:08:22.499 "write_zeroes": true, 00:08:22.499 "zcopy": true, 00:08:22.499 "get_zone_info": false, 00:08:22.499 "zone_management": false, 00:08:22.499 "zone_append": false, 00:08:22.499 "compare": false, 00:08:22.499 "compare_and_write": false, 00:08:22.499 "abort": true, 00:08:22.499 "seek_hole": false, 00:08:22.499 "seek_data": false, 00:08:22.499 "copy": true, 00:08:22.499 "nvme_iov_md": false 00:08:22.499 }, 00:08:22.499 "memory_domains": [ 00:08:22.499 { 00:08:22.499 "dma_device_id": "system", 00:08:22.499 "dma_device_type": 1 00:08:22.499 }, 00:08:22.499 { 00:08:22.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.499 "dma_device_type": 2 00:08:22.499 } 00:08:22.499 ], 00:08:22.499 "driver_specific": {} 00:08:22.499 } 00:08:22.499 ] 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.499 02:41:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.499 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.499 "name": "Existed_Raid", 00:08:22.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.499 "strip_size_kb": 64, 00:08:22.499 "state": "configuring", 00:08:22.499 "raid_level": "raid0", 00:08:22.499 "superblock": false, 00:08:22.499 "num_base_bdevs": 3, 00:08:22.499 "num_base_bdevs_discovered": 1, 00:08:22.499 "num_base_bdevs_operational": 3, 00:08:22.499 "base_bdevs_list": [ 00:08:22.499 { 00:08:22.499 "name": "BaseBdev1", 
00:08:22.499 "uuid": "55fd05ee-a824-401a-bf46-1f85c1514338", 00:08:22.499 "is_configured": true, 00:08:22.499 "data_offset": 0, 00:08:22.499 "data_size": 65536 00:08:22.499 }, 00:08:22.499 { 00:08:22.499 "name": "BaseBdev2", 00:08:22.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.499 "is_configured": false, 00:08:22.499 "data_offset": 0, 00:08:22.499 "data_size": 0 00:08:22.499 }, 00:08:22.499 { 00:08:22.499 "name": "BaseBdev3", 00:08:22.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.500 "is_configured": false, 00:08:22.500 "data_offset": 0, 00:08:22.500 "data_size": 0 00:08:22.500 } 00:08:22.500 ] 00:08:22.500 }' 00:08:22.500 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.500 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.875 [2024-12-07 02:41:33.815212] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.875 [2024-12-07 02:41:33.815271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.875 [2024-12-07 
02:41:33.823235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.875 [2024-12-07 02:41:33.825444] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.875 [2024-12-07 02:41:33.825485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.875 [2024-12-07 02:41:33.825495] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:22.875 [2024-12-07 02:41:33.825505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.875 "name": "Existed_Raid", 00:08:22.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.875 "strip_size_kb": 64, 00:08:22.875 "state": "configuring", 00:08:22.875 "raid_level": "raid0", 00:08:22.875 "superblock": false, 00:08:22.875 "num_base_bdevs": 3, 00:08:22.875 "num_base_bdevs_discovered": 1, 00:08:22.875 "num_base_bdevs_operational": 3, 00:08:22.875 "base_bdevs_list": [ 00:08:22.875 { 00:08:22.875 "name": "BaseBdev1", 00:08:22.875 "uuid": "55fd05ee-a824-401a-bf46-1f85c1514338", 00:08:22.875 "is_configured": true, 00:08:22.875 "data_offset": 0, 00:08:22.875 "data_size": 65536 00:08:22.875 }, 00:08:22.875 { 00:08:22.875 "name": "BaseBdev2", 00:08:22.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.875 "is_configured": false, 00:08:22.875 "data_offset": 0, 00:08:22.875 "data_size": 0 00:08:22.875 }, 00:08:22.875 { 00:08:22.875 "name": "BaseBdev3", 00:08:22.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.875 "is_configured": false, 00:08:22.875 "data_offset": 0, 00:08:22.875 "data_size": 0 00:08:22.875 } 00:08:22.875 ] 00:08:22.875 }' 00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:22.875 02:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 [2024-12-07 02:41:34.300824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.477 BaseBdev2 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.477 02:41:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.477 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.477 [ 00:08:23.477 { 00:08:23.477 "name": "BaseBdev2", 00:08:23.477 "aliases": [ 00:08:23.477 "ad389d5d-f014-49b0-a7dd-922a96b08509" 00:08:23.477 ], 00:08:23.477 "product_name": "Malloc disk", 00:08:23.477 "block_size": 512, 00:08:23.477 "num_blocks": 65536, 00:08:23.477 "uuid": "ad389d5d-f014-49b0-a7dd-922a96b08509", 00:08:23.477 "assigned_rate_limits": { 00:08:23.477 "rw_ios_per_sec": 0, 00:08:23.477 "rw_mbytes_per_sec": 0, 00:08:23.477 "r_mbytes_per_sec": 0, 00:08:23.478 "w_mbytes_per_sec": 0 00:08:23.478 }, 00:08:23.478 "claimed": true, 00:08:23.478 "claim_type": "exclusive_write", 00:08:23.478 "zoned": false, 00:08:23.478 "supported_io_types": { 00:08:23.478 "read": true, 00:08:23.478 "write": true, 00:08:23.478 "unmap": true, 00:08:23.478 "flush": true, 00:08:23.478 "reset": true, 00:08:23.478 "nvme_admin": false, 00:08:23.478 "nvme_io": false, 00:08:23.478 "nvme_io_md": false, 00:08:23.478 "write_zeroes": true, 00:08:23.478 "zcopy": true, 00:08:23.478 "get_zone_info": false, 00:08:23.478 "zone_management": false, 00:08:23.478 "zone_append": false, 00:08:23.478 "compare": false, 00:08:23.478 "compare_and_write": false, 00:08:23.478 "abort": true, 00:08:23.478 "seek_hole": false, 00:08:23.478 "seek_data": false, 00:08:23.478 "copy": true, 00:08:23.478 "nvme_iov_md": false 00:08:23.478 }, 00:08:23.478 "memory_domains": [ 00:08:23.478 { 00:08:23.478 "dma_device_id": "system", 00:08:23.478 "dma_device_type": 1 00:08:23.478 }, 00:08:23.478 { 00:08:23.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.478 "dma_device_type": 2 00:08:23.478 } 00:08:23.478 ], 00:08:23.478 "driver_specific": {} 00:08:23.478 } 00:08:23.478 ] 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.478 02:41:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.478 "name": "Existed_Raid", 00:08:23.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.478 "strip_size_kb": 64, 00:08:23.478 "state": "configuring", 00:08:23.478 "raid_level": "raid0", 00:08:23.478 "superblock": false, 00:08:23.478 "num_base_bdevs": 3, 00:08:23.478 "num_base_bdevs_discovered": 2, 00:08:23.478 "num_base_bdevs_operational": 3, 00:08:23.478 "base_bdevs_list": [ 00:08:23.478 { 00:08:23.478 "name": "BaseBdev1", 00:08:23.478 "uuid": "55fd05ee-a824-401a-bf46-1f85c1514338", 00:08:23.478 "is_configured": true, 00:08:23.478 "data_offset": 0, 00:08:23.478 "data_size": 65536 00:08:23.478 }, 00:08:23.478 { 00:08:23.478 "name": "BaseBdev2", 00:08:23.478 "uuid": "ad389d5d-f014-49b0-a7dd-922a96b08509", 00:08:23.478 "is_configured": true, 00:08:23.478 "data_offset": 0, 00:08:23.478 "data_size": 65536 00:08:23.478 }, 00:08:23.478 { 00:08:23.478 "name": "BaseBdev3", 00:08:23.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.478 "is_configured": false, 00:08:23.478 "data_offset": 0, 00:08:23.478 "data_size": 0 00:08:23.478 } 00:08:23.478 ] 00:08:23.478 }' 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.478 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.738 [2024-12-07 02:41:34.773164] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:23.738 [2024-12-07 02:41:34.773212] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:23.738 [2024-12-07 02:41:34.773225] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:23.738 [2024-12-07 02:41:34.773565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:23.738 [2024-12-07 02:41:34.773736] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:23.738 [2024-12-07 02:41:34.773751] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:23.738 [2024-12-07 02:41:34.773988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.738 BaseBdev3 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.738 
02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.738 [ 00:08:23.738 { 00:08:23.738 "name": "BaseBdev3", 00:08:23.738 "aliases": [ 00:08:23.738 "d94235cf-67f6-4780-9772-dd9d047778bb" 00:08:23.738 ], 00:08:23.738 "product_name": "Malloc disk", 00:08:23.738 "block_size": 512, 00:08:23.738 "num_blocks": 65536, 00:08:23.738 "uuid": "d94235cf-67f6-4780-9772-dd9d047778bb", 00:08:23.738 "assigned_rate_limits": { 00:08:23.738 "rw_ios_per_sec": 0, 00:08:23.738 "rw_mbytes_per_sec": 0, 00:08:23.738 "r_mbytes_per_sec": 0, 00:08:23.738 "w_mbytes_per_sec": 0 00:08:23.738 }, 00:08:23.738 "claimed": true, 00:08:23.738 "claim_type": "exclusive_write", 00:08:23.738 "zoned": false, 00:08:23.738 "supported_io_types": { 00:08:23.738 "read": true, 00:08:23.738 "write": true, 00:08:23.738 "unmap": true, 00:08:23.738 "flush": true, 00:08:23.738 "reset": true, 00:08:23.738 "nvme_admin": false, 00:08:23.738 "nvme_io": false, 00:08:23.738 "nvme_io_md": false, 00:08:23.738 "write_zeroes": true, 00:08:23.738 "zcopy": true, 00:08:23.738 "get_zone_info": false, 00:08:23.738 "zone_management": false, 00:08:23.738 "zone_append": false, 00:08:23.738 "compare": false, 00:08:23.738 "compare_and_write": false, 00:08:23.738 "abort": true, 00:08:23.738 "seek_hole": false, 00:08:23.738 "seek_data": false, 00:08:23.738 "copy": true, 00:08:23.738 "nvme_iov_md": false 00:08:23.738 }, 00:08:23.738 "memory_domains": [ 00:08:23.738 { 00:08:23.738 "dma_device_id": "system", 00:08:23.738 "dma_device_type": 1 00:08:23.738 }, 00:08:23.738 { 00:08:23.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.738 "dma_device_type": 2 00:08:23.738 } 00:08:23.738 ], 00:08:23.738 "driver_specific": {} 00:08:23.738 } 00:08:23.738 ] 
00:08:23.738 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.998 "name": "Existed_Raid", 00:08:23.998 "uuid": "982cc634-ff48-43b2-9493-d2e851029568", 00:08:23.998 "strip_size_kb": 64, 00:08:23.998 "state": "online", 00:08:23.998 "raid_level": "raid0", 00:08:23.998 "superblock": false, 00:08:23.998 "num_base_bdevs": 3, 00:08:23.998 "num_base_bdevs_discovered": 3, 00:08:23.998 "num_base_bdevs_operational": 3, 00:08:23.998 "base_bdevs_list": [ 00:08:23.998 { 00:08:23.998 "name": "BaseBdev1", 00:08:23.998 "uuid": "55fd05ee-a824-401a-bf46-1f85c1514338", 00:08:23.998 "is_configured": true, 00:08:23.998 "data_offset": 0, 00:08:23.998 "data_size": 65536 00:08:23.998 }, 00:08:23.998 { 00:08:23.998 "name": "BaseBdev2", 00:08:23.998 "uuid": "ad389d5d-f014-49b0-a7dd-922a96b08509", 00:08:23.998 "is_configured": true, 00:08:23.998 "data_offset": 0, 00:08:23.998 "data_size": 65536 00:08:23.998 }, 00:08:23.998 { 00:08:23.998 "name": "BaseBdev3", 00:08:23.998 "uuid": "d94235cf-67f6-4780-9772-dd9d047778bb", 00:08:23.998 "is_configured": true, 00:08:23.998 "data_offset": 0, 00:08:23.998 "data_size": 65536 00:08:23.998 } 00:08:23.998 ] 00:08:23.998 }' 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.998 02:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.257 [2024-12-07 02:41:35.272639] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.257 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.257 "name": "Existed_Raid", 00:08:24.257 "aliases": [ 00:08:24.257 "982cc634-ff48-43b2-9493-d2e851029568" 00:08:24.257 ], 00:08:24.257 "product_name": "Raid Volume", 00:08:24.257 "block_size": 512, 00:08:24.257 "num_blocks": 196608, 00:08:24.257 "uuid": "982cc634-ff48-43b2-9493-d2e851029568", 00:08:24.257 "assigned_rate_limits": { 00:08:24.257 "rw_ios_per_sec": 0, 00:08:24.257 "rw_mbytes_per_sec": 0, 00:08:24.257 "r_mbytes_per_sec": 0, 00:08:24.257 "w_mbytes_per_sec": 0 00:08:24.257 }, 00:08:24.257 "claimed": false, 00:08:24.257 "zoned": false, 00:08:24.257 "supported_io_types": { 00:08:24.257 "read": true, 00:08:24.257 "write": true, 00:08:24.257 "unmap": true, 00:08:24.257 "flush": true, 00:08:24.257 "reset": true, 00:08:24.257 "nvme_admin": false, 00:08:24.257 "nvme_io": false, 00:08:24.257 "nvme_io_md": false, 00:08:24.257 "write_zeroes": true, 00:08:24.257 "zcopy": false, 00:08:24.257 "get_zone_info": false, 00:08:24.257 "zone_management": false, 00:08:24.257 
"zone_append": false, 00:08:24.257 "compare": false, 00:08:24.257 "compare_and_write": false, 00:08:24.257 "abort": false, 00:08:24.257 "seek_hole": false, 00:08:24.257 "seek_data": false, 00:08:24.257 "copy": false, 00:08:24.257 "nvme_iov_md": false 00:08:24.257 }, 00:08:24.257 "memory_domains": [ 00:08:24.257 { 00:08:24.257 "dma_device_id": "system", 00:08:24.257 "dma_device_type": 1 00:08:24.257 }, 00:08:24.257 { 00:08:24.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.257 "dma_device_type": 2 00:08:24.257 }, 00:08:24.257 { 00:08:24.257 "dma_device_id": "system", 00:08:24.257 "dma_device_type": 1 00:08:24.257 }, 00:08:24.257 { 00:08:24.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.257 "dma_device_type": 2 00:08:24.257 }, 00:08:24.257 { 00:08:24.257 "dma_device_id": "system", 00:08:24.257 "dma_device_type": 1 00:08:24.257 }, 00:08:24.257 { 00:08:24.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.257 "dma_device_type": 2 00:08:24.257 } 00:08:24.257 ], 00:08:24.257 "driver_specific": { 00:08:24.257 "raid": { 00:08:24.257 "uuid": "982cc634-ff48-43b2-9493-d2e851029568", 00:08:24.257 "strip_size_kb": 64, 00:08:24.257 "state": "online", 00:08:24.257 "raid_level": "raid0", 00:08:24.257 "superblock": false, 00:08:24.257 "num_base_bdevs": 3, 00:08:24.257 "num_base_bdevs_discovered": 3, 00:08:24.257 "num_base_bdevs_operational": 3, 00:08:24.257 "base_bdevs_list": [ 00:08:24.257 { 00:08:24.257 "name": "BaseBdev1", 00:08:24.257 "uuid": "55fd05ee-a824-401a-bf46-1f85c1514338", 00:08:24.257 "is_configured": true, 00:08:24.257 "data_offset": 0, 00:08:24.257 "data_size": 65536 00:08:24.257 }, 00:08:24.257 { 00:08:24.257 "name": "BaseBdev2", 00:08:24.257 "uuid": "ad389d5d-f014-49b0-a7dd-922a96b08509", 00:08:24.257 "is_configured": true, 00:08:24.258 "data_offset": 0, 00:08:24.258 "data_size": 65536 00:08:24.258 }, 00:08:24.258 { 00:08:24.258 "name": "BaseBdev3", 00:08:24.258 "uuid": "d94235cf-67f6-4780-9772-dd9d047778bb", 00:08:24.258 "is_configured": true, 
00:08:24.258 "data_offset": 0, 00:08:24.258 "data_size": 65536 00:08:24.258 } 00:08:24.258 ] 00:08:24.258 } 00:08:24.258 } 00:08:24.258 }' 00:08:24.258 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.517 BaseBdev2 00:08:24.517 BaseBdev3' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.517 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.517 [2024-12-07 02:41:35.571900] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.517 [2024-12-07 02:41:35.571972] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.517 [2024-12-07 02:41:35.572049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.785 "name": "Existed_Raid", 00:08:24.785 "uuid": "982cc634-ff48-43b2-9493-d2e851029568", 00:08:24.785 "strip_size_kb": 64, 00:08:24.785 "state": "offline", 00:08:24.785 "raid_level": "raid0", 00:08:24.785 "superblock": false, 00:08:24.785 "num_base_bdevs": 3, 00:08:24.785 "num_base_bdevs_discovered": 2, 00:08:24.785 "num_base_bdevs_operational": 2, 00:08:24.785 "base_bdevs_list": [ 00:08:24.785 { 00:08:24.785 "name": null, 00:08:24.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.785 "is_configured": false, 00:08:24.785 "data_offset": 0, 00:08:24.785 "data_size": 65536 00:08:24.785 }, 00:08:24.785 { 00:08:24.785 "name": "BaseBdev2", 00:08:24.785 "uuid": "ad389d5d-f014-49b0-a7dd-922a96b08509", 00:08:24.785 "is_configured": true, 00:08:24.785 "data_offset": 0, 00:08:24.785 "data_size": 65536 00:08:24.785 }, 00:08:24.785 { 00:08:24.785 "name": "BaseBdev3", 00:08:24.785 "uuid": "d94235cf-67f6-4780-9772-dd9d047778bb", 00:08:24.785 "is_configured": true, 00:08:24.785 "data_offset": 0, 00:08:24.785 "data_size": 65536 00:08:24.785 } 00:08:24.785 ] 00:08:24.785 }' 00:08:24.785 02:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.785 02:41:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.047 [2024-12-07 02:41:36.071791] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.047 02:41:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:25.047 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.306 [2024-12-07 02:41:36.156558] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:25.306 [2024-12-07 02:41:36.156626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.306 BaseBdev2 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.306 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.307 [ 00:08:25.307 { 00:08:25.307 "name": "BaseBdev2", 00:08:25.307 "aliases": [ 00:08:25.307 "b50d63dd-08b8-448c-9237-4cad6b3245e5" 00:08:25.307 ], 00:08:25.307 "product_name": "Malloc disk", 00:08:25.307 "block_size": 512, 00:08:25.307 "num_blocks": 65536, 00:08:25.307 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5", 00:08:25.307 "assigned_rate_limits": { 00:08:25.307 "rw_ios_per_sec": 0, 00:08:25.307 "rw_mbytes_per_sec": 0, 00:08:25.307 "r_mbytes_per_sec": 0, 00:08:25.307 "w_mbytes_per_sec": 0 00:08:25.307 }, 00:08:25.307 "claimed": false, 00:08:25.307 "zoned": false, 00:08:25.307 "supported_io_types": { 00:08:25.307 "read": true, 00:08:25.307 "write": true, 00:08:25.307 "unmap": true, 00:08:25.307 "flush": true, 00:08:25.307 "reset": true, 00:08:25.307 "nvme_admin": false, 00:08:25.307 "nvme_io": false, 00:08:25.307 "nvme_io_md": false, 00:08:25.307 "write_zeroes": true, 00:08:25.307 "zcopy": true, 00:08:25.307 "get_zone_info": false, 00:08:25.307 "zone_management": false, 00:08:25.307 "zone_append": false, 00:08:25.307 "compare": false, 00:08:25.307 "compare_and_write": false, 00:08:25.307 "abort": true, 00:08:25.307 "seek_hole": false, 00:08:25.307 "seek_data": false, 00:08:25.307 "copy": true, 00:08:25.307 "nvme_iov_md": false 00:08:25.307 }, 00:08:25.307 "memory_domains": [ 00:08:25.307 { 00:08:25.307 "dma_device_id": "system", 00:08:25.307 "dma_device_type": 1 00:08:25.307 }, 
00:08:25.307 { 00:08:25.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.307 "dma_device_type": 2 00:08:25.307 } 00:08:25.307 ], 00:08:25.307 "driver_specific": {} 00:08:25.307 } 00:08:25.307 ] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.307 BaseBdev3 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.307 [ 00:08:25.307 { 00:08:25.307 "name": "BaseBdev3", 00:08:25.307 "aliases": [ 00:08:25.307 "b284b7ab-1b52-4187-a223-1a5970e363da" 00:08:25.307 ], 00:08:25.307 "product_name": "Malloc disk", 00:08:25.307 "block_size": 512, 00:08:25.307 "num_blocks": 65536, 00:08:25.307 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da", 00:08:25.307 "assigned_rate_limits": { 00:08:25.307 "rw_ios_per_sec": 0, 00:08:25.307 "rw_mbytes_per_sec": 0, 00:08:25.307 "r_mbytes_per_sec": 0, 00:08:25.307 "w_mbytes_per_sec": 0 00:08:25.307 }, 00:08:25.307 "claimed": false, 00:08:25.307 "zoned": false, 00:08:25.307 "supported_io_types": { 00:08:25.307 "read": true, 00:08:25.307 "write": true, 00:08:25.307 "unmap": true, 00:08:25.307 "flush": true, 00:08:25.307 "reset": true, 00:08:25.307 "nvme_admin": false, 00:08:25.307 "nvme_io": false, 00:08:25.307 "nvme_io_md": false, 00:08:25.307 "write_zeroes": true, 00:08:25.307 "zcopy": true, 00:08:25.307 "get_zone_info": false, 00:08:25.307 "zone_management": false, 00:08:25.307 "zone_append": false, 00:08:25.307 "compare": false, 00:08:25.307 "compare_and_write": false, 00:08:25.307 "abort": true, 00:08:25.307 "seek_hole": false, 00:08:25.307 "seek_data": false, 00:08:25.307 "copy": true, 00:08:25.307 "nvme_iov_md": false 00:08:25.307 }, 00:08:25.307 "memory_domains": [ 00:08:25.307 { 00:08:25.307 "dma_device_id": "system", 00:08:25.307 "dma_device_type": 1 00:08:25.307 }, 00:08:25.307 { 
00:08:25.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.307 "dma_device_type": 2 00:08:25.307 } 00:08:25.307 ], 00:08:25.307 "driver_specific": {} 00:08:25.307 } 00:08:25.307 ] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.307 [2024-12-07 02:41:36.354082] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.307 [2024-12-07 02:41:36.354199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.307 [2024-12-07 02:41:36.354242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.307 [2024-12-07 02:41:36.356369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.307 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.566 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.566 "name": "Existed_Raid", 00:08:25.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.566 "strip_size_kb": 64, 00:08:25.566 "state": "configuring", 00:08:25.566 "raid_level": "raid0", 00:08:25.566 "superblock": false, 00:08:25.566 "num_base_bdevs": 3, 00:08:25.566 "num_base_bdevs_discovered": 2, 00:08:25.566 "num_base_bdevs_operational": 3, 00:08:25.566 "base_bdevs_list": [ 00:08:25.566 { 00:08:25.566 "name": "BaseBdev1", 00:08:25.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.566 
"is_configured": false, 00:08:25.566 "data_offset": 0, 00:08:25.566 "data_size": 0 00:08:25.566 }, 00:08:25.566 { 00:08:25.566 "name": "BaseBdev2", 00:08:25.566 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5", 00:08:25.566 "is_configured": true, 00:08:25.566 "data_offset": 0, 00:08:25.566 "data_size": 65536 00:08:25.566 }, 00:08:25.566 { 00:08:25.566 "name": "BaseBdev3", 00:08:25.566 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da", 00:08:25.566 "is_configured": true, 00:08:25.566 "data_offset": 0, 00:08:25.566 "data_size": 65536 00:08:25.566 } 00:08:25.566 ] 00:08:25.566 }' 00:08:25.566 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.566 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.826 [2024-12-07 02:41:36.793305] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.826 02:41:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.826 "name": "Existed_Raid", 00:08:25.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.826 "strip_size_kb": 64, 00:08:25.826 "state": "configuring", 00:08:25.826 "raid_level": "raid0", 00:08:25.826 "superblock": false, 00:08:25.826 "num_base_bdevs": 3, 00:08:25.826 "num_base_bdevs_discovered": 1, 00:08:25.826 "num_base_bdevs_operational": 3, 00:08:25.826 "base_bdevs_list": [ 00:08:25.826 { 00:08:25.826 "name": "BaseBdev1", 00:08:25.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.826 "is_configured": false, 00:08:25.826 "data_offset": 0, 00:08:25.826 "data_size": 0 00:08:25.826 }, 00:08:25.826 { 00:08:25.826 "name": null, 00:08:25.826 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5", 00:08:25.826 "is_configured": false, 00:08:25.826 "data_offset": 0, 
00:08:25.826 "data_size": 65536 00:08:25.826 }, 00:08:25.826 { 00:08:25.826 "name": "BaseBdev3", 00:08:25.826 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da", 00:08:25.826 "is_configured": true, 00:08:25.826 "data_offset": 0, 00:08:25.826 "data_size": 65536 00:08:25.826 } 00:08:25.826 ] 00:08:25.826 }' 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.826 02:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.397 BaseBdev1 00:08:26.397 [2024-12-07 02:41:37.305196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.397 [
00:08:26.397 {
00:08:26.397 "name": "BaseBdev1",
00:08:26.397 "aliases": [
00:08:26.397 "49eb6068-2ddc-4344-8dde-7671a6c32ed1"
00:08:26.397 ],
00:08:26.397 "product_name": "Malloc disk",
00:08:26.397 "block_size": 512,
00:08:26.397 "num_blocks": 65536,
00:08:26.397 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:26.397 "assigned_rate_limits": {
00:08:26.397 "rw_ios_per_sec": 0,
00:08:26.397 "rw_mbytes_per_sec": 0,
00:08:26.397 "r_mbytes_per_sec": 0,
00:08:26.397 "w_mbytes_per_sec": 0
00:08:26.397 },
00:08:26.397 "claimed": true,
00:08:26.397 "claim_type": "exclusive_write",
00:08:26.397 "zoned": false,
00:08:26.397 "supported_io_types": {
00:08:26.397 "read": true,
00:08:26.397 "write": true,
00:08:26.397 "unmap": true,
00:08:26.397 "flush": true,
00:08:26.397 "reset": true,
00:08:26.397 "nvme_admin": false,
00:08:26.397 "nvme_io": false,
00:08:26.397 "nvme_io_md": false,
00:08:26.397 "write_zeroes": true,
00:08:26.397 "zcopy": true,
00:08:26.397 "get_zone_info": false,
00:08:26.397 "zone_management": false,
00:08:26.397 "zone_append": false,
00:08:26.397 "compare": false,
00:08:26.397 "compare_and_write": false,
00:08:26.397 "abort": true,
00:08:26.397 "seek_hole": false,
00:08:26.397 "seek_data": false,
00:08:26.397 "copy": true,
00:08:26.397 "nvme_iov_md": false
00:08:26.397 },
00:08:26.397 "memory_domains": [
00:08:26.397 {
00:08:26.397 "dma_device_id": "system",
00:08:26.397 "dma_device_type": 1
00:08:26.397 },
00:08:26.397 {
00:08:26.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:26.397 "dma_device_type": 2
00:08:26.397 }
00:08:26.397 ],
00:08:26.397 "driver_specific": {}
00:08:26.397 }
00:08:26.397 ]
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:26.397 "name": "Existed_Raid",
00:08:26.397 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.397 "strip_size_kb": 64,
00:08:26.397 "state": "configuring",
00:08:26.397 "raid_level": "raid0",
00:08:26.397 "superblock": false,
00:08:26.397 "num_base_bdevs": 3,
00:08:26.397 "num_base_bdevs_discovered": 2,
00:08:26.397 "num_base_bdevs_operational": 3,
00:08:26.397 "base_bdevs_list": [
00:08:26.397 {
00:08:26.397 "name": "BaseBdev1",
00:08:26.397 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:26.397 "is_configured": true,
00:08:26.397 "data_offset": 0,
00:08:26.397 "data_size": 65536
00:08:26.397 },
00:08:26.397 {
00:08:26.397 "name": null,
00:08:26.397 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5",
00:08:26.397 "is_configured": false,
00:08:26.397 "data_offset": 0,
00:08:26.397 "data_size": 65536
00:08:26.397 },
00:08:26.397 {
00:08:26.397 "name": "BaseBdev3",
00:08:26.397 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da",
00:08:26.397 "is_configured": true,
00:08:26.397 "data_offset": 0,
00:08:26.397 "data_size": 65536
00:08:26.397 }
00:08:26.397 ]
00:08:26.397 }'
00:08:26.397 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:26.398 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.968 [2024-12-07 02:41:37.792418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:26.968 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:26.969 "name": "Existed_Raid",
00:08:26.969 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:26.969 "strip_size_kb": 64,
00:08:26.969 "state": "configuring",
00:08:26.969 "raid_level": "raid0",
00:08:26.969 "superblock": false,
00:08:26.969 "num_base_bdevs": 3,
00:08:26.969 "num_base_bdevs_discovered": 1,
00:08:26.969 "num_base_bdevs_operational": 3,
00:08:26.969 "base_bdevs_list": [
00:08:26.969 {
00:08:26.969 "name": "BaseBdev1",
00:08:26.969 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:26.969 "is_configured": true,
00:08:26.969 "data_offset": 0,
00:08:26.969 "data_size": 65536
00:08:26.969 },
00:08:26.969 {
00:08:26.969 "name": null,
00:08:26.969 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5",
00:08:26.969 "is_configured": false,
00:08:26.969 "data_offset": 0,
00:08:26.969 "data_size": 65536
00:08:26.969 },
00:08:26.969 {
00:08:26.969 "name": null,
00:08:26.969 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da",
00:08:26.969 "is_configured": false,
00:08:26.969 "data_offset": 0,
00:08:26.969 "data_size": 65536
00:08:26.969 }
00:08:26.969 ]
00:08:26.969 }'
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:26.969 02:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.229 [2024-12-07 02:41:38.287744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.229 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:27.489 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.489 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:27.489 "name": "Existed_Raid",
00:08:27.489 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:27.489 "strip_size_kb": 64,
00:08:27.489 "state": "configuring",
00:08:27.489 "raid_level": "raid0",
00:08:27.489 "superblock": false,
00:08:27.489 "num_base_bdevs": 3,
00:08:27.489 "num_base_bdevs_discovered": 2,
00:08:27.489 "num_base_bdevs_operational": 3,
00:08:27.489 "base_bdevs_list": [
00:08:27.489 {
00:08:27.489 "name": "BaseBdev1",
00:08:27.489 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:27.489 "is_configured": true,
00:08:27.489 "data_offset": 0,
00:08:27.489 "data_size": 65536
00:08:27.489 },
00:08:27.489 {
00:08:27.489 "name": null,
00:08:27.489 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5",
00:08:27.489 "is_configured": false,
00:08:27.489 "data_offset": 0,
00:08:27.489 "data_size": 65536
00:08:27.489 },
00:08:27.489 {
00:08:27.489 "name": "BaseBdev3",
00:08:27.489 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da",
00:08:27.489 "is_configured": true,
00:08:27.489 "data_offset": 0,
00:08:27.489 "data_size": 65536
00:08:27.489 }
00:08:27.489 ]
00:08:27.489 }'
00:08:27.489 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:27.489 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:27.748 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.748 [2024-12-07 02:41:38.806932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:28.008 "name": "Existed_Raid",
00:08:28.008 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.008 "strip_size_kb": 64,
00:08:28.008 "state": "configuring",
00:08:28.008 "raid_level": "raid0",
00:08:28.008 "superblock": false,
00:08:28.008 "num_base_bdevs": 3,
00:08:28.008 "num_base_bdevs_discovered": 1,
00:08:28.008 "num_base_bdevs_operational": 3,
00:08:28.008 "base_bdevs_list": [
00:08:28.008 {
00:08:28.008 "name": null,
00:08:28.008 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:28.008 "is_configured": false,
00:08:28.008 "data_offset": 0,
00:08:28.008 "data_size": 65536
00:08:28.008 },
00:08:28.008 {
00:08:28.008 "name": null,
00:08:28.008 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5",
00:08:28.008 "is_configured": false,
00:08:28.008 "data_offset": 0,
00:08:28.008 "data_size": 65536
00:08:28.008 },
00:08:28.008 {
00:08:28.008 "name": "BaseBdev3",
00:08:28.008 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da",
00:08:28.008 "is_configured": true,
00:08:28.008 "data_offset": 0,
00:08:28.008 "data_size": 65536
00:08:28.008 }
00:08:28.008 ]
00:08:28.008 }'
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:28.008 02:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.268 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.268 [2024-12-07 02:41:39.333881] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.269 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.528 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:28.528 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.528 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:28.528 "name": "Existed_Raid",
00:08:28.528 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:28.528 "strip_size_kb": 64,
00:08:28.528 "state": "configuring",
00:08:28.528 "raid_level": "raid0",
00:08:28.528 "superblock": false,
00:08:28.528 "num_base_bdevs": 3,
00:08:28.528 "num_base_bdevs_discovered": 2,
00:08:28.528 "num_base_bdevs_operational": 3,
00:08:28.528 "base_bdevs_list": [
00:08:28.528 {
00:08:28.528 "name": null,
00:08:28.528 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:28.528 "is_configured": false,
00:08:28.528 "data_offset": 0,
00:08:28.528 "data_size": 65536
00:08:28.528 },
00:08:28.528 {
00:08:28.528 "name": "BaseBdev2",
00:08:28.528 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5",
00:08:28.528 "is_configured": true,
00:08:28.528 "data_offset": 0,
00:08:28.528 "data_size": 65536
00:08:28.528 },
00:08:28.528 {
00:08:28.528 "name": "BaseBdev3",
00:08:28.528 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da",
00:08:28.528 "is_configured": true,
00:08:28.528 "data_offset": 0,
00:08:28.528 "data_size": 65536
00:08:28.528 }
00:08:28.528 ]
00:08:28.528 }'
00:08:28.528 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:28.528 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:28.787 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49eb6068-2ddc-4344-8dde-7671a6c32ed1
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.047 [2024-12-07 02:41:39.921621] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:08:29.047 [2024-12-07 02:41:39.921719] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:08:29.047 [2024-12-07 02:41:39.921766] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:29.047 [2024-12-07 02:41:39.922077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
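The `waitforbdev` helper exercised in this log polls `rpc_cmd bdev_get_bdevs -b NAME -t TIMEOUT` until the newly created bdev (here `NewBaseBdev`, re-created with the old base bdev's UUID) becomes visible. A minimal runnable sketch of that polling pattern follows; `rpc_cmd` is a hypothetical stub here (it pretends the bdev appears on the third poll) so the sketch runs without a live SPDK target, and the retry count and sleep interval are illustrative assumptions, not the helper's exact values.

```shell
#!/usr/bin/env bash
attempts=0
rpc_cmd() {
  # Hypothetical stub standing in for scripts/rpc.py: the bdev only
  # "appears" (exit status 0) on the third poll.
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

waitforbdev() {
  local bdev_name=$1
  local bdev_timeout=${2:-2000} # ms, mirrors the default seen in the log
  local i
  for ((i = 0; i < 5; i++)); do
    if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforbdev NewBaseBdev && echo "bdev NewBaseBdev ready after $attempts polls"
```

The stub makes the success path deterministic; against a real target, the loop would instead bound how long the test waits for bdev examination to finish.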
00:08:29.047 [2024-12-07 02:41:39.922248] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:08:29.047 [2024-12-07 02:41:39.922284] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:08:29.047 [2024-12-07 02:41:39.922531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:29.047 NewBaseBdev
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.047 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.047 [
00:08:29.048 {
00:08:29.048 "name": "NewBaseBdev",
00:08:29.048 "aliases": [
00:08:29.048 "49eb6068-2ddc-4344-8dde-7671a6c32ed1"
00:08:29.048 ],
00:08:29.048 "product_name": "Malloc disk",
00:08:29.048 "block_size": 512,
00:08:29.048 "num_blocks": 65536,
00:08:29.048 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:29.048 "assigned_rate_limits": {
00:08:29.048 "rw_ios_per_sec": 0,
00:08:29.048 "rw_mbytes_per_sec": 0,
00:08:29.048 "r_mbytes_per_sec": 0,
00:08:29.048 "w_mbytes_per_sec": 0
00:08:29.048 },
00:08:29.048 "claimed": true,
00:08:29.048 "claim_type": "exclusive_write",
00:08:29.048 "zoned": false,
00:08:29.048 "supported_io_types": {
00:08:29.048 "read": true,
00:08:29.048 "write": true,
00:08:29.048 "unmap": true,
00:08:29.048 "flush": true,
00:08:29.048 "reset": true,
00:08:29.048 "nvme_admin": false,
00:08:29.048 "nvme_io": false,
00:08:29.048 "nvme_io_md": false,
00:08:29.048 "write_zeroes": true,
00:08:29.048 "zcopy": true,
00:08:29.048 "get_zone_info": false,
00:08:29.048 "zone_management": false,
00:08:29.048 "zone_append": false,
00:08:29.048 "compare": false,
00:08:29.048 "compare_and_write": false,
00:08:29.048 "abort": true,
00:08:29.048 "seek_hole": false,
00:08:29.048 "seek_data": false,
00:08:29.048 "copy": true,
00:08:29.048 "nvme_iov_md": false
00:08:29.048 },
00:08:29.048 "memory_domains": [
00:08:29.048 {
00:08:29.048 "dma_device_id": "system",
00:08:29.048 "dma_device_type": 1
00:08:29.048 },
00:08:29.048 {
00:08:29.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.048 "dma_device_type": 2
00:08:29.048 }
00:08:29.048 ],
00:08:29.048 "driver_specific": {}
00:08:29.048 }
00:08:29.048 ]
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.048 02:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.048 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:29.048 "name": "Existed_Raid",
00:08:29.048 "uuid": "13eaccca-23f7-4387-b725-033151dd18b4",
00:08:29.048 "strip_size_kb": 64,
00:08:29.048 "state": "online",
00:08:29.048 "raid_level": "raid0",
00:08:29.048 "superblock": false,
00:08:29.048 "num_base_bdevs": 3,
00:08:29.048 "num_base_bdevs_discovered": 3,
00:08:29.048 "num_base_bdevs_operational": 3,
00:08:29.048 "base_bdevs_list": [
00:08:29.048 {
00:08:29.048 "name": "NewBaseBdev",
00:08:29.048 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:29.048 "is_configured": true,
00:08:29.048 "data_offset": 0,
00:08:29.048 "data_size": 65536
00:08:29.048 },
00:08:29.048 {
00:08:29.048 "name": "BaseBdev2",
00:08:29.048 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5",
00:08:29.048 "is_configured": true,
00:08:29.048 "data_offset": 0,
00:08:29.048 "data_size": 65536
00:08:29.048 },
00:08:29.048 {
00:08:29.048 "name": "BaseBdev3",
00:08:29.048 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da",
00:08:29.048 "is_configured": true,
00:08:29.048 "data_offset": 0,
00:08:29.048 "data_size": 65536
00:08:29.048 }
00:08:29.048 ]
00:08:29.048 }'
00:08:29.048 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:29.048 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.618 [2024-12-07 02:41:40.405071] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:29.618 "name": "Existed_Raid",
00:08:29.618 "aliases": [
00:08:29.618 "13eaccca-23f7-4387-b725-033151dd18b4"
00:08:29.618 ],
00:08:29.618 "product_name": "Raid Volume",
00:08:29.618 "block_size": 512,
00:08:29.618 "num_blocks": 196608,
00:08:29.618 "uuid": "13eaccca-23f7-4387-b725-033151dd18b4",
00:08:29.618 "assigned_rate_limits": {
00:08:29.618 "rw_ios_per_sec": 0,
00:08:29.618 "rw_mbytes_per_sec": 0,
00:08:29.618 "r_mbytes_per_sec": 0,
00:08:29.618 "w_mbytes_per_sec": 0
00:08:29.618 },
00:08:29.618 "claimed": false,
00:08:29.618 "zoned": false,
00:08:29.618 "supported_io_types": {
00:08:29.618 "read": true,
00:08:29.618 "write": true,
00:08:29.618 "unmap": true,
00:08:29.618 "flush": true,
00:08:29.618 "reset": true,
00:08:29.618 "nvme_admin": false,
00:08:29.618 "nvme_io": false,
00:08:29.618 "nvme_io_md": false,
00:08:29.618 "write_zeroes": true,
00:08:29.618 "zcopy": false,
00:08:29.618 "get_zone_info": false,
00:08:29.618 "zone_management": false,
00:08:29.618 "zone_append": false,
00:08:29.618 "compare": false,
00:08:29.618 "compare_and_write": false,
00:08:29.618 "abort": false,
00:08:29.618 "seek_hole": false,
00:08:29.618 "seek_data": false,
00:08:29.618 "copy": false,
00:08:29.618 "nvme_iov_md": false
00:08:29.618 },
00:08:29.618 "memory_domains": [
00:08:29.618 {
00:08:29.618 "dma_device_id": "system",
00:08:29.618 "dma_device_type": 1
00:08:29.618 },
00:08:29.618 {
00:08:29.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.618 "dma_device_type": 2
00:08:29.618 },
00:08:29.618 {
00:08:29.618 "dma_device_id": "system",
00:08:29.618 "dma_device_type": 1
00:08:29.618 },
00:08:29.618 {
00:08:29.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.618 "dma_device_type": 2
00:08:29.618 },
00:08:29.618 {
00:08:29.618 "dma_device_id": "system",
00:08:29.618 "dma_device_type": 1
00:08:29.618 },
00:08:29.618 {
00:08:29.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.618 "dma_device_type": 2
00:08:29.618 }
00:08:29.618 ],
00:08:29.618 "driver_specific": {
00:08:29.618 "raid": {
00:08:29.618 "uuid": "13eaccca-23f7-4387-b725-033151dd18b4",
00:08:29.618 "strip_size_kb": 64,
00:08:29.618 "state": "online",
00:08:29.618 "raid_level": "raid0",
00:08:29.618 "superblock": false,
00:08:29.618 "num_base_bdevs": 3,
00:08:29.618 "num_base_bdevs_discovered": 3,
00:08:29.618 "num_base_bdevs_operational": 3,
00:08:29.618 "base_bdevs_list": [
00:08:29.618 {
00:08:29.618 "name": "NewBaseBdev",
00:08:29.618 "uuid": "49eb6068-2ddc-4344-8dde-7671a6c32ed1",
00:08:29.618 "is_configured": true,
00:08:29.618 "data_offset": 0,
00:08:29.618 "data_size": 65536
00:08:29.618 },
00:08:29.618 {
00:08:29.618 "name": "BaseBdev2",
00:08:29.618 "uuid": "b50d63dd-08b8-448c-9237-4cad6b3245e5",
00:08:29.618 "is_configured": true,
00:08:29.618 "data_offset": 0,
00:08:29.618 "data_size": 65536
00:08:29.618 },
00:08:29.618 {
00:08:29.618 "name": "BaseBdev3",
00:08:29.618 "uuid": "b284b7ab-1b52-4187-a223-1a5970e363da",
00:08:29.618 "is_configured": true,
00:08:29.618 "data_offset": 0,
00:08:29.618 "data_size": 65536
00:08:29.618 }
00:08:29.618 ]
00:08:29.618 }
00:08:29.618 }
00:08:29.618 }'
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:29.618 BaseBdev2
00:08:29.618 BaseBdev3'
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.618 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.619 [2024-12-07 02:41:40.668322] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:29.619 [2024-12-07 02:41:40.668387] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:29.619 [2024-12-07 02:41:40.668463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:29.619 [2024-12-07 02:41:40.668518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:29.619 [2024-12-07 02:41:40.668529] bdev_raid.c:
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75230 00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 75230 ']' 00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 75230 00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.619 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75230 00:08:29.879 killing process with pid 75230 00:08:29.879 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.879 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.879 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75230' 00:08:29.879 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 75230 00:08:29.879 [2024-12-07 02:41:40.716563] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.879 02:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 75230 00:08:29.879 [2024-12-07 02:41:40.775707] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.139 02:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.139 00:08:30.139 real 0m9.230s 00:08:30.139 user 0m15.447s 00:08:30.139 sys 0m1.952s 00:08:30.139 ************************************ 00:08:30.139 END TEST 
raid_state_function_test 00:08:30.139 ************************************ 00:08:30.139 02:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.139 02:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.139 02:41:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:30.139 02:41:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:30.139 02:41:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.139 02:41:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.400 ************************************ 00:08:30.400 START TEST raid_state_function_test_sb 00:08:30.400 ************************************ 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.400 02:41:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=75835 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75835' 00:08:30.400 Process raid pid: 75835 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75835 00:08:30.400 02:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75835 ']' 00:08:30.401 02:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.401 02:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.401 02:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.401 02:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.401 02:41:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.401 [2024-12-07 02:41:41.316016] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
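The log above repeatedly exercises a `killprocess` helper: it checks that the pid is still alive with `kill -0`, confirms the command name via `ps --no-headers -o comm=` before signalling, then reaps the process. A minimal self-contained sketch of that pattern (the function name and messages here are illustrative, not the real `autotest_common.sh` helper):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern from the log: probe liveness with
# `kill -0` (sends no signal, only checks existence), read the command
# name with ps, signal, then reap with wait.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || { echo "pid $pid not running"; return 1; }
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

sleep 30 &
killprocess_sketch $!
```

The `kill -0` probe is why the helper can distinguish "process already gone" from "kill failed": signal 0 performs only the permission and existence checks.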
00:08:30.401 [2024-12-07 02:41:41.316144] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.660 [2024-12-07 02:41:41.481815] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.661 [2024-12-07 02:41:41.550697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.661 [2024-12-07 02:41:41.626534] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.661 [2024-12-07 02:41:41.626570] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.229 [2024-12-07 02:41:42.137745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.229 [2024-12-07 02:41:42.137812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.229 [2024-12-07 02:41:42.137827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.229 [2024-12-07 02:41:42.137838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.229 [2024-12-07 02:41:42.137844] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:31.229 [2024-12-07 02:41:42.137856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.229 "name": "Existed_Raid", 00:08:31.229 "uuid": "7a096d8c-d38b-4425-8ab6-4b684aa0f8a7", 00:08:31.229 "strip_size_kb": 64, 00:08:31.229 "state": "configuring", 00:08:31.229 "raid_level": "raid0", 00:08:31.229 "superblock": true, 00:08:31.229 "num_base_bdevs": 3, 00:08:31.229 "num_base_bdevs_discovered": 0, 00:08:31.229 "num_base_bdevs_operational": 3, 00:08:31.229 "base_bdevs_list": [ 00:08:31.229 { 00:08:31.229 "name": "BaseBdev1", 00:08:31.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.229 "is_configured": false, 00:08:31.229 "data_offset": 0, 00:08:31.229 "data_size": 0 00:08:31.229 }, 00:08:31.229 { 00:08:31.229 "name": "BaseBdev2", 00:08:31.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.229 "is_configured": false, 00:08:31.229 "data_offset": 0, 00:08:31.229 "data_size": 0 00:08:31.229 }, 00:08:31.229 { 00:08:31.229 "name": "BaseBdev3", 00:08:31.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.229 "is_configured": false, 00:08:31.229 "data_offset": 0, 00:08:31.229 "data_size": 0 00:08:31.229 } 00:08:31.229 ] 00:08:31.229 }' 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.229 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.798 [2024-12-07 02:41:42.604812] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:31.798 [2024-12-07 02:41:42.604914] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.798 [2024-12-07 02:41:42.616834] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.798 [2024-12-07 02:41:42.616910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.798 [2024-12-07 02:41:42.616935] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.798 [2024-12-07 02:41:42.616958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.798 [2024-12-07 02:41:42.616975] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:31.798 [2024-12-07 02:41:42.616996] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.798 [2024-12-07 02:41:42.643817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.798 BaseBdev1 
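After creating BaseBdev1 the test calls a `waitforbdev` helper with a 2000 ms timeout, polling `rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000` until the bdev appears. A self-contained sketch of that poll-until-timeout shape, with a generic probe command standing in for the RPC (the helper name and the file-based probe are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch of waitforbdev-style polling: retry a probe command until it
# succeeds or a millisecond deadline expires. The real helper probes via
# `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>`; here `test -e` on a
# marker file keeps the sketch runnable without an SPDK target.
wait_for_probe() {
    local timeout_ms=$1; shift
    local deadline=$(( SECONDS + timeout_ms / 1000 ))
    while ! "$@" >/dev/null 2>&1; do
        (( SECONDS < deadline )) || return 1   # timed out
        sleep 0.1
    done
}

marker=$(mktemp -u)
( sleep 0.3; touch "$marker" ) &   # simulate the bdev appearing later
wait_for_probe 2000 test -e "$marker" && echo "bdev ready"
rm -f "$marker"
```

Polling with a deadline rather than a fixed sleep is what lets the suite pass quickly on fast machines while still tolerating slow bdev registration.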
00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:31.798 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.799 [ 00:08:31.799 { 00:08:31.799 "name": "BaseBdev1", 00:08:31.799 "aliases": [ 00:08:31.799 "34d8f5ce-9cff-4bd1-8c0f-077e5f4b57cc" 00:08:31.799 ], 00:08:31.799 "product_name": "Malloc disk", 00:08:31.799 "block_size": 512, 00:08:31.799 "num_blocks": 65536, 00:08:31.799 "uuid": "34d8f5ce-9cff-4bd1-8c0f-077e5f4b57cc", 00:08:31.799 "assigned_rate_limits": { 00:08:31.799 
"rw_ios_per_sec": 0, 00:08:31.799 "rw_mbytes_per_sec": 0, 00:08:31.799 "r_mbytes_per_sec": 0, 00:08:31.799 "w_mbytes_per_sec": 0 00:08:31.799 }, 00:08:31.799 "claimed": true, 00:08:31.799 "claim_type": "exclusive_write", 00:08:31.799 "zoned": false, 00:08:31.799 "supported_io_types": { 00:08:31.799 "read": true, 00:08:31.799 "write": true, 00:08:31.799 "unmap": true, 00:08:31.799 "flush": true, 00:08:31.799 "reset": true, 00:08:31.799 "nvme_admin": false, 00:08:31.799 "nvme_io": false, 00:08:31.799 "nvme_io_md": false, 00:08:31.799 "write_zeroes": true, 00:08:31.799 "zcopy": true, 00:08:31.799 "get_zone_info": false, 00:08:31.799 "zone_management": false, 00:08:31.799 "zone_append": false, 00:08:31.799 "compare": false, 00:08:31.799 "compare_and_write": false, 00:08:31.799 "abort": true, 00:08:31.799 "seek_hole": false, 00:08:31.799 "seek_data": false, 00:08:31.799 "copy": true, 00:08:31.799 "nvme_iov_md": false 00:08:31.799 }, 00:08:31.799 "memory_domains": [ 00:08:31.799 { 00:08:31.799 "dma_device_id": "system", 00:08:31.799 "dma_device_type": 1 00:08:31.799 }, 00:08:31.799 { 00:08:31.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.799 "dma_device_type": 2 00:08:31.799 } 00:08:31.799 ], 00:08:31.799 "driver_specific": {} 00:08:31.799 } 00:08:31.799 ] 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.799 "name": "Existed_Raid", 00:08:31.799 "uuid": "e8b58113-0203-42a8-b068-5bd7233cac10", 00:08:31.799 "strip_size_kb": 64, 00:08:31.799 "state": "configuring", 00:08:31.799 "raid_level": "raid0", 00:08:31.799 "superblock": true, 00:08:31.799 "num_base_bdevs": 3, 00:08:31.799 "num_base_bdevs_discovered": 1, 00:08:31.799 "num_base_bdevs_operational": 3, 00:08:31.799 "base_bdevs_list": [ 00:08:31.799 { 00:08:31.799 "name": "BaseBdev1", 00:08:31.799 "uuid": "34d8f5ce-9cff-4bd1-8c0f-077e5f4b57cc", 00:08:31.799 "is_configured": true, 00:08:31.799 "data_offset": 2048, 00:08:31.799 "data_size": 63488 
00:08:31.799 }, 00:08:31.799 { 00:08:31.799 "name": "BaseBdev2", 00:08:31.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.799 "is_configured": false, 00:08:31.799 "data_offset": 0, 00:08:31.799 "data_size": 0 00:08:31.799 }, 00:08:31.799 { 00:08:31.799 "name": "BaseBdev3", 00:08:31.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.799 "is_configured": false, 00:08:31.799 "data_offset": 0, 00:08:31.799 "data_size": 0 00:08:31.799 } 00:08:31.799 ] 00:08:31.799 }' 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.799 02:41:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.058 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.058 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.058 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.058 [2024-12-07 02:41:43.127014] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.058 [2024-12-07 02:41:43.127073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:32.058 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.058 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.058 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.058 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.317 [2024-12-07 02:41:43.135024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.317 [2024-12-07 
02:41:43.137206] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.317 [2024-12-07 02:41:43.137296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.317 [2024-12-07 02:41:43.137310] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.317 [2024-12-07 02:41:43.137321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.317 "name": "Existed_Raid", 00:08:32.317 "uuid": "9fef52f3-0e98-4026-9820-f2cfd2bb3298", 00:08:32.317 "strip_size_kb": 64, 00:08:32.317 "state": "configuring", 00:08:32.317 "raid_level": "raid0", 00:08:32.317 "superblock": true, 00:08:32.317 "num_base_bdevs": 3, 00:08:32.317 "num_base_bdevs_discovered": 1, 00:08:32.317 "num_base_bdevs_operational": 3, 00:08:32.317 "base_bdevs_list": [ 00:08:32.317 { 00:08:32.317 "name": "BaseBdev1", 00:08:32.317 "uuid": "34d8f5ce-9cff-4bd1-8c0f-077e5f4b57cc", 00:08:32.317 "is_configured": true, 00:08:32.317 "data_offset": 2048, 00:08:32.317 "data_size": 63488 00:08:32.317 }, 00:08:32.317 { 00:08:32.317 "name": "BaseBdev2", 00:08:32.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.317 "is_configured": false, 00:08:32.317 "data_offset": 0, 00:08:32.317 "data_size": 0 00:08:32.317 }, 00:08:32.317 { 00:08:32.317 "name": "BaseBdev3", 00:08:32.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.317 "is_configured": false, 00:08:32.317 "data_offset": 0, 00:08:32.317 "data_size": 0 00:08:32.317 } 00:08:32.317 ] 00:08:32.317 }' 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.317 02:41:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 [2024-12-07 02:41:43.621710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:32.577 BaseBdev2 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.577 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 [ 00:08:32.577 { 00:08:32.577 "name": "BaseBdev2", 00:08:32.577 "aliases": [ 00:08:32.577 "21249cc6-5bb9-40ec-86f4-b967607043cf" 00:08:32.577 ], 00:08:32.577 "product_name": "Malloc disk", 00:08:32.577 "block_size": 512, 00:08:32.577 "num_blocks": 65536, 00:08:32.577 "uuid": "21249cc6-5bb9-40ec-86f4-b967607043cf", 00:08:32.577 "assigned_rate_limits": { 00:08:32.577 "rw_ios_per_sec": 0, 00:08:32.577 "rw_mbytes_per_sec": 0, 00:08:32.577 "r_mbytes_per_sec": 0, 00:08:32.577 "w_mbytes_per_sec": 0 00:08:32.577 }, 00:08:32.578 "claimed": true, 00:08:32.578 "claim_type": "exclusive_write", 00:08:32.836 "zoned": false, 00:08:32.836 "supported_io_types": { 00:08:32.836 "read": true, 00:08:32.836 "write": true, 00:08:32.836 "unmap": true, 00:08:32.836 "flush": true, 00:08:32.836 "reset": true, 00:08:32.836 "nvme_admin": false, 00:08:32.836 "nvme_io": false, 00:08:32.836 "nvme_io_md": false, 00:08:32.836 "write_zeroes": true, 00:08:32.836 "zcopy": true, 00:08:32.836 "get_zone_info": false, 00:08:32.836 "zone_management": false, 00:08:32.836 "zone_append": false, 00:08:32.836 "compare": false, 00:08:32.836 "compare_and_write": false, 00:08:32.836 "abort": true, 00:08:32.836 "seek_hole": false, 00:08:32.836 "seek_data": false, 00:08:32.836 "copy": true, 00:08:32.836 "nvme_iov_md": false 00:08:32.836 }, 00:08:32.836 "memory_domains": [ 00:08:32.836 { 00:08:32.836 "dma_device_id": "system", 00:08:32.836 "dma_device_type": 1 00:08:32.836 }, 00:08:32.836 { 00:08:32.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.836 "dma_device_type": 2 00:08:32.836 } 00:08:32.836 ], 00:08:32.836 "driver_specific": {} 00:08:32.836 } 00:08:32.836 ] 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.836 "name": "Existed_Raid", 00:08:32.836 "uuid": "9fef52f3-0e98-4026-9820-f2cfd2bb3298", 00:08:32.836 "strip_size_kb": 64, 00:08:32.836 "state": "configuring", 00:08:32.836 "raid_level": "raid0", 00:08:32.836 "superblock": true, 00:08:32.836 "num_base_bdevs": 3, 00:08:32.836 "num_base_bdevs_discovered": 2, 00:08:32.836 "num_base_bdevs_operational": 3, 00:08:32.836 "base_bdevs_list": [ 00:08:32.836 { 00:08:32.836 "name": "BaseBdev1", 00:08:32.836 "uuid": "34d8f5ce-9cff-4bd1-8c0f-077e5f4b57cc", 00:08:32.836 "is_configured": true, 00:08:32.836 "data_offset": 2048, 00:08:32.836 "data_size": 63488 00:08:32.836 }, 00:08:32.836 { 00:08:32.836 "name": "BaseBdev2", 00:08:32.836 "uuid": "21249cc6-5bb9-40ec-86f4-b967607043cf", 00:08:32.836 "is_configured": true, 00:08:32.836 "data_offset": 2048, 00:08:32.836 "data_size": 63488 00:08:32.836 }, 00:08:32.836 { 00:08:32.836 "name": "BaseBdev3", 00:08:32.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.836 "is_configured": false, 00:08:32.836 "data_offset": 0, 00:08:32.836 "data_size": 0 00:08:32.836 } 00:08:32.836 ] 00:08:32.836 }' 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.836 02:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.095 [2024-12-07 02:41:44.069658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.095 [2024-12-07 02:41:44.069863] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:33.095 [2024-12-07 02:41:44.069885] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:33.095 [2024-12-07 02:41:44.070192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:33.095 BaseBdev3 00:08:33.095 [2024-12-07 02:41:44.070324] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:33.095 [2024-12-07 02:41:44.070340] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:33.095 [2024-12-07 02:41:44.070484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.095 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 [ 00:08:33.096 { 00:08:33.096 "name": "BaseBdev3", 00:08:33.096 "aliases": [ 00:08:33.096 "bfb9a019-8a13-4efd-95b8-ec8f5abe4a5a" 00:08:33.096 ], 00:08:33.096 "product_name": "Malloc disk", 00:08:33.096 "block_size": 512, 00:08:33.096 "num_blocks": 65536, 00:08:33.096 "uuid": "bfb9a019-8a13-4efd-95b8-ec8f5abe4a5a", 00:08:33.096 "assigned_rate_limits": { 00:08:33.096 "rw_ios_per_sec": 0, 00:08:33.096 "rw_mbytes_per_sec": 0, 00:08:33.096 "r_mbytes_per_sec": 0, 00:08:33.096 "w_mbytes_per_sec": 0 00:08:33.096 }, 00:08:33.096 "claimed": true, 00:08:33.096 "claim_type": "exclusive_write", 00:08:33.096 "zoned": false, 00:08:33.096 "supported_io_types": { 00:08:33.096 "read": true, 00:08:33.096 "write": true, 00:08:33.096 "unmap": true, 00:08:33.096 "flush": true, 00:08:33.096 "reset": true, 00:08:33.096 "nvme_admin": false, 00:08:33.096 "nvme_io": false, 00:08:33.096 "nvme_io_md": false, 00:08:33.096 "write_zeroes": true, 00:08:33.096 "zcopy": true, 00:08:33.096 "get_zone_info": false, 00:08:33.096 "zone_management": false, 00:08:33.096 "zone_append": false, 00:08:33.096 "compare": false, 00:08:33.096 "compare_and_write": false, 00:08:33.096 "abort": true, 00:08:33.096 "seek_hole": false, 00:08:33.096 "seek_data": false, 00:08:33.096 "copy": true, 00:08:33.096 "nvme_iov_md": false 00:08:33.096 }, 00:08:33.096 "memory_domains": [ 00:08:33.096 { 00:08:33.096 "dma_device_id": "system", 00:08:33.096 "dma_device_type": 1 00:08:33.096 }, 00:08:33.096 { 00:08:33.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.096 "dma_device_type": 2 00:08:33.096 } 00:08:33.096 ], 00:08:33.096 "driver_specific": 
{} 00:08:33.096 } 00:08:33.096 ] 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.096 
02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.096 "name": "Existed_Raid", 00:08:33.096 "uuid": "9fef52f3-0e98-4026-9820-f2cfd2bb3298", 00:08:33.096 "strip_size_kb": 64, 00:08:33.096 "state": "online", 00:08:33.096 "raid_level": "raid0", 00:08:33.096 "superblock": true, 00:08:33.096 "num_base_bdevs": 3, 00:08:33.096 "num_base_bdevs_discovered": 3, 00:08:33.096 "num_base_bdevs_operational": 3, 00:08:33.096 "base_bdevs_list": [ 00:08:33.096 { 00:08:33.096 "name": "BaseBdev1", 00:08:33.096 "uuid": "34d8f5ce-9cff-4bd1-8c0f-077e5f4b57cc", 00:08:33.096 "is_configured": true, 00:08:33.096 "data_offset": 2048, 00:08:33.096 "data_size": 63488 00:08:33.096 }, 00:08:33.096 { 00:08:33.096 "name": "BaseBdev2", 00:08:33.096 "uuid": "21249cc6-5bb9-40ec-86f4-b967607043cf", 00:08:33.096 "is_configured": true, 00:08:33.096 "data_offset": 2048, 00:08:33.096 "data_size": 63488 00:08:33.096 }, 00:08:33.096 { 00:08:33.096 "name": "BaseBdev3", 00:08:33.096 "uuid": "bfb9a019-8a13-4efd-95b8-ec8f5abe4a5a", 00:08:33.096 "is_configured": true, 00:08:33.096 "data_offset": 2048, 00:08:33.096 "data_size": 63488 00:08:33.096 } 00:08:33.096 ] 00:08:33.096 }' 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.096 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.664 [2024-12-07 02:41:44.541115] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.664 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.664 "name": "Existed_Raid", 00:08:33.664 "aliases": [ 00:08:33.664 "9fef52f3-0e98-4026-9820-f2cfd2bb3298" 00:08:33.664 ], 00:08:33.664 "product_name": "Raid Volume", 00:08:33.664 "block_size": 512, 00:08:33.664 "num_blocks": 190464, 00:08:33.664 "uuid": "9fef52f3-0e98-4026-9820-f2cfd2bb3298", 00:08:33.664 "assigned_rate_limits": { 00:08:33.664 "rw_ios_per_sec": 0, 00:08:33.664 "rw_mbytes_per_sec": 0, 00:08:33.664 "r_mbytes_per_sec": 0, 00:08:33.664 "w_mbytes_per_sec": 0 00:08:33.664 }, 00:08:33.664 "claimed": false, 00:08:33.664 "zoned": false, 00:08:33.664 "supported_io_types": { 00:08:33.664 "read": true, 00:08:33.664 "write": true, 00:08:33.664 "unmap": true, 00:08:33.664 "flush": true, 00:08:33.664 "reset": true, 00:08:33.664 "nvme_admin": false, 00:08:33.664 "nvme_io": false, 00:08:33.664 "nvme_io_md": false, 00:08:33.664 
"write_zeroes": true, 00:08:33.665 "zcopy": false, 00:08:33.665 "get_zone_info": false, 00:08:33.665 "zone_management": false, 00:08:33.665 "zone_append": false, 00:08:33.665 "compare": false, 00:08:33.665 "compare_and_write": false, 00:08:33.665 "abort": false, 00:08:33.665 "seek_hole": false, 00:08:33.665 "seek_data": false, 00:08:33.665 "copy": false, 00:08:33.665 "nvme_iov_md": false 00:08:33.665 }, 00:08:33.665 "memory_domains": [ 00:08:33.665 { 00:08:33.665 "dma_device_id": "system", 00:08:33.665 "dma_device_type": 1 00:08:33.665 }, 00:08:33.665 { 00:08:33.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.665 "dma_device_type": 2 00:08:33.665 }, 00:08:33.665 { 00:08:33.665 "dma_device_id": "system", 00:08:33.665 "dma_device_type": 1 00:08:33.665 }, 00:08:33.665 { 00:08:33.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.665 "dma_device_type": 2 00:08:33.665 }, 00:08:33.665 { 00:08:33.665 "dma_device_id": "system", 00:08:33.665 "dma_device_type": 1 00:08:33.665 }, 00:08:33.665 { 00:08:33.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.665 "dma_device_type": 2 00:08:33.665 } 00:08:33.665 ], 00:08:33.665 "driver_specific": { 00:08:33.665 "raid": { 00:08:33.665 "uuid": "9fef52f3-0e98-4026-9820-f2cfd2bb3298", 00:08:33.665 "strip_size_kb": 64, 00:08:33.665 "state": "online", 00:08:33.665 "raid_level": "raid0", 00:08:33.665 "superblock": true, 00:08:33.665 "num_base_bdevs": 3, 00:08:33.665 "num_base_bdevs_discovered": 3, 00:08:33.665 "num_base_bdevs_operational": 3, 00:08:33.665 "base_bdevs_list": [ 00:08:33.665 { 00:08:33.665 "name": "BaseBdev1", 00:08:33.665 "uuid": "34d8f5ce-9cff-4bd1-8c0f-077e5f4b57cc", 00:08:33.665 "is_configured": true, 00:08:33.665 "data_offset": 2048, 00:08:33.665 "data_size": 63488 00:08:33.665 }, 00:08:33.665 { 00:08:33.665 "name": "BaseBdev2", 00:08:33.665 "uuid": "21249cc6-5bb9-40ec-86f4-b967607043cf", 00:08:33.665 "is_configured": true, 00:08:33.665 "data_offset": 2048, 00:08:33.665 "data_size": 63488 00:08:33.665 }, 
00:08:33.665 { 00:08:33.665 "name": "BaseBdev3", 00:08:33.665 "uuid": "bfb9a019-8a13-4efd-95b8-ec8f5abe4a5a", 00:08:33.665 "is_configured": true, 00:08:33.665 "data_offset": 2048, 00:08:33.665 "data_size": 63488 00:08:33.665 } 00:08:33.665 ] 00:08:33.665 } 00:08:33.665 } 00:08:33.665 }' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:33.665 BaseBdev2 00:08:33.665 BaseBdev3' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.665 
02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.665 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.925 [2024-12-07 02:41:44.828443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:33.925 [2024-12-07 02:41:44.828513] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.925 [2024-12-07 02:41:44.828638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.925 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.925 "name": "Existed_Raid", 00:08:33.925 "uuid": "9fef52f3-0e98-4026-9820-f2cfd2bb3298", 00:08:33.925 "strip_size_kb": 64, 00:08:33.925 "state": "offline", 00:08:33.925 "raid_level": "raid0", 00:08:33.926 "superblock": true, 00:08:33.926 "num_base_bdevs": 3, 00:08:33.926 "num_base_bdevs_discovered": 2, 00:08:33.926 "num_base_bdevs_operational": 2, 00:08:33.926 "base_bdevs_list": [ 00:08:33.926 { 00:08:33.926 "name": null, 00:08:33.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.926 "is_configured": false, 00:08:33.926 "data_offset": 0, 00:08:33.926 "data_size": 63488 00:08:33.926 }, 00:08:33.926 { 00:08:33.926 "name": "BaseBdev2", 00:08:33.926 "uuid": "21249cc6-5bb9-40ec-86f4-b967607043cf", 00:08:33.926 "is_configured": true, 00:08:33.926 "data_offset": 2048, 00:08:33.926 "data_size": 63488 00:08:33.926 }, 00:08:33.926 { 00:08:33.926 "name": "BaseBdev3", 00:08:33.926 "uuid": "bfb9a019-8a13-4efd-95b8-ec8f5abe4a5a", 
00:08:33.926 "is_configured": true, 00:08:33.926 "data_offset": 2048, 00:08:33.926 "data_size": 63488 00:08:33.926 } 00:08:33.926 ] 00:08:33.926 }' 00:08:33.926 02:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.926 02:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 [2024-12-07 02:41:45.372523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 [2024-12-07 02:41:45.453038] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:34.498 [2024-12-07 02:41:45.453094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 BaseBdev2 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.498 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.757 [ 00:08:34.757 { 00:08:34.757 "name": "BaseBdev2", 00:08:34.757 "aliases": [ 00:08:34.757 "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4" 00:08:34.757 ], 00:08:34.757 "product_name": "Malloc disk", 00:08:34.757 "block_size": 512, 00:08:34.757 "num_blocks": 65536, 00:08:34.757 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:34.757 "assigned_rate_limits": { 00:08:34.757 "rw_ios_per_sec": 0, 00:08:34.757 "rw_mbytes_per_sec": 0, 00:08:34.757 "r_mbytes_per_sec": 0, 00:08:34.757 "w_mbytes_per_sec": 0 00:08:34.757 }, 00:08:34.757 "claimed": false, 00:08:34.757 "zoned": false, 00:08:34.757 "supported_io_types": { 00:08:34.757 "read": true, 00:08:34.757 "write": true, 00:08:34.757 "unmap": true, 00:08:34.757 "flush": true, 00:08:34.757 "reset": true, 00:08:34.757 "nvme_admin": false, 00:08:34.757 "nvme_io": false, 00:08:34.757 "nvme_io_md": false, 00:08:34.757 "write_zeroes": true, 00:08:34.757 "zcopy": true, 00:08:34.757 "get_zone_info": false, 00:08:34.757 "zone_management": false, 00:08:34.757 
"zone_append": false, 00:08:34.757 "compare": false, 00:08:34.757 "compare_and_write": false, 00:08:34.757 "abort": true, 00:08:34.757 "seek_hole": false, 00:08:34.757 "seek_data": false, 00:08:34.757 "copy": true, 00:08:34.757 "nvme_iov_md": false 00:08:34.757 }, 00:08:34.757 "memory_domains": [ 00:08:34.757 { 00:08:34.757 "dma_device_id": "system", 00:08:34.757 "dma_device_type": 1 00:08:34.757 }, 00:08:34.757 { 00:08:34.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.757 "dma_device_type": 2 00:08:34.757 } 00:08:34.757 ], 00:08:34.757 "driver_specific": {} 00:08:34.757 } 00:08:34.757 ] 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.757 BaseBdev3 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:34.757 
02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.757 [ 00:08:34.757 { 00:08:34.757 "name": "BaseBdev3", 00:08:34.757 "aliases": [ 00:08:34.757 "2568dc36-b63d-4ea7-8322-7f596139295a" 00:08:34.757 ], 00:08:34.757 "product_name": "Malloc disk", 00:08:34.757 "block_size": 512, 00:08:34.757 "num_blocks": 65536, 00:08:34.757 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:34.757 "assigned_rate_limits": { 00:08:34.757 "rw_ios_per_sec": 0, 00:08:34.757 "rw_mbytes_per_sec": 0, 00:08:34.757 "r_mbytes_per_sec": 0, 00:08:34.757 "w_mbytes_per_sec": 0 00:08:34.757 }, 00:08:34.757 "claimed": false, 00:08:34.757 "zoned": false, 00:08:34.757 "supported_io_types": { 00:08:34.757 "read": true, 00:08:34.757 "write": true, 00:08:34.757 "unmap": true, 00:08:34.757 "flush": true, 00:08:34.757 "reset": true, 00:08:34.757 "nvme_admin": false, 00:08:34.757 "nvme_io": false, 00:08:34.757 "nvme_io_md": false, 00:08:34.757 "write_zeroes": true, 00:08:34.757 "zcopy": true, 00:08:34.757 "get_zone_info": false, 
00:08:34.757 "zone_management": false, 00:08:34.757 "zone_append": false, 00:08:34.757 "compare": false, 00:08:34.757 "compare_and_write": false, 00:08:34.757 "abort": true, 00:08:34.757 "seek_hole": false, 00:08:34.757 "seek_data": false, 00:08:34.757 "copy": true, 00:08:34.757 "nvme_iov_md": false 00:08:34.757 }, 00:08:34.757 "memory_domains": [ 00:08:34.757 { 00:08:34.757 "dma_device_id": "system", 00:08:34.757 "dma_device_type": 1 00:08:34.757 }, 00:08:34.757 { 00:08:34.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.757 "dma_device_type": 2 00:08:34.757 } 00:08:34.757 ], 00:08:34.757 "driver_specific": {} 00:08:34.757 } 00:08:34.757 ] 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.757 [2024-12-07 02:41:45.654902] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.757 [2024-12-07 02:41:45.655014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.757 [2024-12-07 02:41:45.655056] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.757 [2024-12-07 02:41:45.657162] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:34.757 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:34.758 "name": "Existed_Raid", 00:08:34.758 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:34.758 "strip_size_kb": 64, 00:08:34.758 "state": "configuring", 00:08:34.758 "raid_level": "raid0", 00:08:34.758 "superblock": true, 00:08:34.758 "num_base_bdevs": 3, 00:08:34.758 "num_base_bdevs_discovered": 2, 00:08:34.758 "num_base_bdevs_operational": 3, 00:08:34.758 "base_bdevs_list": [ 00:08:34.758 { 00:08:34.758 "name": "BaseBdev1", 00:08:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.758 "is_configured": false, 00:08:34.758 "data_offset": 0, 00:08:34.758 "data_size": 0 00:08:34.758 }, 00:08:34.758 { 00:08:34.758 "name": "BaseBdev2", 00:08:34.758 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:34.758 "is_configured": true, 00:08:34.758 "data_offset": 2048, 00:08:34.758 "data_size": 63488 00:08:34.758 }, 00:08:34.758 { 00:08:34.758 "name": "BaseBdev3", 00:08:34.758 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:34.758 "is_configured": true, 00:08:34.758 "data_offset": 2048, 00:08:34.758 "data_size": 63488 00:08:34.758 } 00:08:34.758 ] 00:08:34.758 }' 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.758 02:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.017 [2024-12-07 02:41:46.082183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.017 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.276 "name": "Existed_Raid", 00:08:35.276 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:35.276 "strip_size_kb": 64, 00:08:35.276 "state": "configuring", 00:08:35.276 "raid_level": "raid0", 
00:08:35.276 "superblock": true, 00:08:35.276 "num_base_bdevs": 3, 00:08:35.276 "num_base_bdevs_discovered": 1, 00:08:35.276 "num_base_bdevs_operational": 3, 00:08:35.276 "base_bdevs_list": [ 00:08:35.276 { 00:08:35.276 "name": "BaseBdev1", 00:08:35.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.276 "is_configured": false, 00:08:35.276 "data_offset": 0, 00:08:35.276 "data_size": 0 00:08:35.276 }, 00:08:35.276 { 00:08:35.276 "name": null, 00:08:35.276 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:35.276 "is_configured": false, 00:08:35.276 "data_offset": 0, 00:08:35.276 "data_size": 63488 00:08:35.276 }, 00:08:35.276 { 00:08:35.276 "name": "BaseBdev3", 00:08:35.276 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:35.276 "is_configured": true, 00:08:35.276 "data_offset": 2048, 00:08:35.276 "data_size": 63488 00:08:35.276 } 00:08:35.276 ] 00:08:35.276 }' 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.276 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.537 [2024-12-07 02:41:46.562104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.537 BaseBdev1 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.537 [ 00:08:35.537 { 00:08:35.537 "name": "BaseBdev1", 00:08:35.537 
"aliases": [ 00:08:35.537 "f06b9b94-b8b5-474e-b4fb-be18325d11c4" 00:08:35.537 ], 00:08:35.537 "product_name": "Malloc disk", 00:08:35.537 "block_size": 512, 00:08:35.537 "num_blocks": 65536, 00:08:35.537 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:35.537 "assigned_rate_limits": { 00:08:35.537 "rw_ios_per_sec": 0, 00:08:35.537 "rw_mbytes_per_sec": 0, 00:08:35.537 "r_mbytes_per_sec": 0, 00:08:35.537 "w_mbytes_per_sec": 0 00:08:35.537 }, 00:08:35.537 "claimed": true, 00:08:35.537 "claim_type": "exclusive_write", 00:08:35.537 "zoned": false, 00:08:35.537 "supported_io_types": { 00:08:35.537 "read": true, 00:08:35.537 "write": true, 00:08:35.537 "unmap": true, 00:08:35.537 "flush": true, 00:08:35.537 "reset": true, 00:08:35.537 "nvme_admin": false, 00:08:35.537 "nvme_io": false, 00:08:35.537 "nvme_io_md": false, 00:08:35.537 "write_zeroes": true, 00:08:35.537 "zcopy": true, 00:08:35.537 "get_zone_info": false, 00:08:35.537 "zone_management": false, 00:08:35.537 "zone_append": false, 00:08:35.537 "compare": false, 00:08:35.537 "compare_and_write": false, 00:08:35.537 "abort": true, 00:08:35.537 "seek_hole": false, 00:08:35.537 "seek_data": false, 00:08:35.537 "copy": true, 00:08:35.537 "nvme_iov_md": false 00:08:35.537 }, 00:08:35.537 "memory_domains": [ 00:08:35.537 { 00:08:35.537 "dma_device_id": "system", 00:08:35.537 "dma_device_type": 1 00:08:35.537 }, 00:08:35.537 { 00:08:35.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.537 "dma_device_type": 2 00:08:35.537 } 00:08:35.537 ], 00:08:35.537 "driver_specific": {} 00:08:35.537 } 00:08:35.537 ] 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.537 02:41:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.537 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.796 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.796 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.796 "name": "Existed_Raid", 00:08:35.796 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:35.796 "strip_size_kb": 64, 00:08:35.796 "state": "configuring", 00:08:35.796 "raid_level": "raid0", 00:08:35.796 "superblock": true, 00:08:35.796 "num_base_bdevs": 3, 00:08:35.796 
"num_base_bdevs_discovered": 2, 00:08:35.796 "num_base_bdevs_operational": 3, 00:08:35.796 "base_bdevs_list": [ 00:08:35.796 { 00:08:35.796 "name": "BaseBdev1", 00:08:35.796 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:35.796 "is_configured": true, 00:08:35.796 "data_offset": 2048, 00:08:35.796 "data_size": 63488 00:08:35.796 }, 00:08:35.796 { 00:08:35.796 "name": null, 00:08:35.796 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:35.796 "is_configured": false, 00:08:35.796 "data_offset": 0, 00:08:35.796 "data_size": 63488 00:08:35.796 }, 00:08:35.796 { 00:08:35.796 "name": "BaseBdev3", 00:08:35.796 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:35.796 "is_configured": true, 00:08:35.796 "data_offset": 2048, 00:08:35.796 "data_size": 63488 00:08:35.796 } 00:08:35.796 ] 00:08:35.796 }' 00:08:35.796 02:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.796 02:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.055 02:41:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.055 [2024-12-07 02:41:47.121186] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.055 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.315 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.315 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.315 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.315 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.315 02:41:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.315 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.315 "name": "Existed_Raid", 00:08:36.315 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:36.315 "strip_size_kb": 64, 00:08:36.315 "state": "configuring", 00:08:36.315 "raid_level": "raid0", 00:08:36.315 "superblock": true, 00:08:36.315 "num_base_bdevs": 3, 00:08:36.315 "num_base_bdevs_discovered": 1, 00:08:36.315 "num_base_bdevs_operational": 3, 00:08:36.315 "base_bdevs_list": [ 00:08:36.315 { 00:08:36.315 "name": "BaseBdev1", 00:08:36.315 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:36.315 "is_configured": true, 00:08:36.315 "data_offset": 2048, 00:08:36.315 "data_size": 63488 00:08:36.315 }, 00:08:36.315 { 00:08:36.315 "name": null, 00:08:36.315 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:36.315 "is_configured": false, 00:08:36.315 "data_offset": 0, 00:08:36.315 "data_size": 63488 00:08:36.315 }, 00:08:36.315 { 00:08:36.315 "name": null, 00:08:36.315 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:36.315 "is_configured": false, 00:08:36.315 "data_offset": 0, 00:08:36.315 "data_size": 63488 00:08:36.315 } 00:08:36.315 ] 00:08:36.315 }' 00:08:36.315 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.315 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.574 02:41:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.574 [2024-12-07 02:41:47.556475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.574 "name": "Existed_Raid", 00:08:36.574 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:36.574 "strip_size_kb": 64, 00:08:36.574 "state": "configuring", 00:08:36.574 "raid_level": "raid0", 00:08:36.574 "superblock": true, 00:08:36.574 "num_base_bdevs": 3, 00:08:36.574 "num_base_bdevs_discovered": 2, 00:08:36.574 "num_base_bdevs_operational": 3, 00:08:36.574 "base_bdevs_list": [ 00:08:36.574 { 00:08:36.574 "name": "BaseBdev1", 00:08:36.574 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:36.574 "is_configured": true, 00:08:36.574 "data_offset": 2048, 00:08:36.574 "data_size": 63488 00:08:36.574 }, 00:08:36.574 { 00:08:36.574 "name": null, 00:08:36.574 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:36.574 "is_configured": false, 00:08:36.574 "data_offset": 0, 00:08:36.574 "data_size": 63488 00:08:36.574 }, 00:08:36.574 { 00:08:36.574 "name": "BaseBdev3", 00:08:36.574 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:36.574 "is_configured": true, 00:08:36.574 "data_offset": 2048, 00:08:36.574 "data_size": 63488 00:08:36.574 } 00:08:36.574 ] 00:08:36.574 }' 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.574 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:37.143 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.143 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.143 02:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:37.143 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.143 02:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.143 [2024-12-07 02:41:48.031755] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.143 "name": "Existed_Raid", 00:08:37.143 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:37.143 "strip_size_kb": 64, 00:08:37.143 "state": "configuring", 00:08:37.143 "raid_level": "raid0", 00:08:37.143 "superblock": true, 00:08:37.143 "num_base_bdevs": 3, 00:08:37.143 "num_base_bdevs_discovered": 1, 00:08:37.143 "num_base_bdevs_operational": 3, 00:08:37.143 "base_bdevs_list": [ 00:08:37.143 { 00:08:37.143 "name": null, 00:08:37.143 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:37.143 "is_configured": false, 00:08:37.143 "data_offset": 0, 00:08:37.143 "data_size": 63488 00:08:37.143 }, 00:08:37.143 { 00:08:37.143 "name": null, 00:08:37.143 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:37.143 "is_configured": false, 00:08:37.143 "data_offset": 0, 00:08:37.143 "data_size": 63488 00:08:37.143 
}, 00:08:37.143 { 00:08:37.143 "name": "BaseBdev3", 00:08:37.143 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:37.143 "is_configured": true, 00:08:37.143 "data_offset": 2048, 00:08:37.143 "data_size": 63488 00:08:37.143 } 00:08:37.143 ] 00:08:37.143 }' 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.143 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.713 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.714 [2024-12-07 02:41:48.522743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.714 "name": "Existed_Raid", 00:08:37.714 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:37.714 "strip_size_kb": 64, 00:08:37.714 "state": "configuring", 00:08:37.714 "raid_level": "raid0", 00:08:37.714 "superblock": true, 00:08:37.714 "num_base_bdevs": 3, 00:08:37.714 "num_base_bdevs_discovered": 2, 00:08:37.714 
"num_base_bdevs_operational": 3, 00:08:37.714 "base_bdevs_list": [ 00:08:37.714 { 00:08:37.714 "name": null, 00:08:37.714 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:37.714 "is_configured": false, 00:08:37.714 "data_offset": 0, 00:08:37.714 "data_size": 63488 00:08:37.714 }, 00:08:37.714 { 00:08:37.714 "name": "BaseBdev2", 00:08:37.714 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:37.714 "is_configured": true, 00:08:37.714 "data_offset": 2048, 00:08:37.714 "data_size": 63488 00:08:37.714 }, 00:08:37.714 { 00:08:37.714 "name": "BaseBdev3", 00:08:37.714 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:37.714 "is_configured": true, 00:08:37.714 "data_offset": 2048, 00:08:37.714 "data_size": 63488 00:08:37.714 } 00:08:37.714 ] 00:08:37.714 }' 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.714 02:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.973 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.973 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.974 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.974 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.974 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.233 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:38.233 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:38.233 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f06b9b94-b8b5-474e-b4fb-be18325d11c4 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.234 [2024-12-07 02:41:49.110427] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:38.234 [2024-12-07 02:41:49.110634] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:38.234 [2024-12-07 02:41:49.110653] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:38.234 [2024-12-07 02:41:49.110929] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:38.234 [2024-12-07 02:41:49.111077] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:38.234 [2024-12-07 02:41:49.111086] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:38.234 [2024-12-07 02:41:49.111202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.234 NewBaseBdev 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:38.234 02:41:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.234 [ 00:08:38.234 { 00:08:38.234 "name": "NewBaseBdev", 00:08:38.234 "aliases": [ 00:08:38.234 "f06b9b94-b8b5-474e-b4fb-be18325d11c4" 00:08:38.234 ], 00:08:38.234 "product_name": "Malloc disk", 00:08:38.234 "block_size": 512, 00:08:38.234 "num_blocks": 65536, 00:08:38.234 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:38.234 "assigned_rate_limits": { 00:08:38.234 "rw_ios_per_sec": 0, 00:08:38.234 "rw_mbytes_per_sec": 0, 00:08:38.234 "r_mbytes_per_sec": 0, 00:08:38.234 "w_mbytes_per_sec": 0 00:08:38.234 }, 00:08:38.234 "claimed": true, 00:08:38.234 "claim_type": "exclusive_write", 00:08:38.234 "zoned": false, 00:08:38.234 "supported_io_types": { 00:08:38.234 "read": true, 00:08:38.234 "write": true, 00:08:38.234 "unmap": true, 
00:08:38.234 "flush": true, 00:08:38.234 "reset": true, 00:08:38.234 "nvme_admin": false, 00:08:38.234 "nvme_io": false, 00:08:38.234 "nvme_io_md": false, 00:08:38.234 "write_zeroes": true, 00:08:38.234 "zcopy": true, 00:08:38.234 "get_zone_info": false, 00:08:38.234 "zone_management": false, 00:08:38.234 "zone_append": false, 00:08:38.234 "compare": false, 00:08:38.234 "compare_and_write": false, 00:08:38.234 "abort": true, 00:08:38.234 "seek_hole": false, 00:08:38.234 "seek_data": false, 00:08:38.234 "copy": true, 00:08:38.234 "nvme_iov_md": false 00:08:38.234 }, 00:08:38.234 "memory_domains": [ 00:08:38.234 { 00:08:38.234 "dma_device_id": "system", 00:08:38.234 "dma_device_type": 1 00:08:38.234 }, 00:08:38.234 { 00:08:38.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.234 "dma_device_type": 2 00:08:38.234 } 00:08:38.234 ], 00:08:38.234 "driver_specific": {} 00:08:38.234 } 00:08:38.234 ] 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.234 02:41:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.234 "name": "Existed_Raid", 00:08:38.234 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:38.234 "strip_size_kb": 64, 00:08:38.234 "state": "online", 00:08:38.234 "raid_level": "raid0", 00:08:38.234 "superblock": true, 00:08:38.234 "num_base_bdevs": 3, 00:08:38.234 "num_base_bdevs_discovered": 3, 00:08:38.234 "num_base_bdevs_operational": 3, 00:08:38.234 "base_bdevs_list": [ 00:08:38.234 { 00:08:38.234 "name": "NewBaseBdev", 00:08:38.234 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:38.234 "is_configured": true, 00:08:38.234 "data_offset": 2048, 00:08:38.234 "data_size": 63488 00:08:38.234 }, 00:08:38.234 { 00:08:38.234 "name": "BaseBdev2", 00:08:38.234 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:38.234 "is_configured": true, 00:08:38.234 "data_offset": 2048, 00:08:38.234 "data_size": 63488 00:08:38.234 }, 00:08:38.234 { 00:08:38.234 "name": "BaseBdev3", 00:08:38.234 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:38.234 "is_configured": 
true, 00:08:38.234 "data_offset": 2048, 00:08:38.234 "data_size": 63488 00:08:38.234 } 00:08:38.234 ] 00:08:38.234 }' 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.234 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.803 [2024-12-07 02:41:49.593932] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.803 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.803 "name": "Existed_Raid", 00:08:38.803 "aliases": [ 00:08:38.803 "76d7cc6e-2584-4db4-9a91-060383ab9e2a" 00:08:38.803 ], 00:08:38.803 "product_name": "Raid Volume", 
00:08:38.803 "block_size": 512, 00:08:38.803 "num_blocks": 190464, 00:08:38.803 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:38.803 "assigned_rate_limits": { 00:08:38.803 "rw_ios_per_sec": 0, 00:08:38.803 "rw_mbytes_per_sec": 0, 00:08:38.803 "r_mbytes_per_sec": 0, 00:08:38.803 "w_mbytes_per_sec": 0 00:08:38.803 }, 00:08:38.803 "claimed": false, 00:08:38.803 "zoned": false, 00:08:38.803 "supported_io_types": { 00:08:38.803 "read": true, 00:08:38.803 "write": true, 00:08:38.803 "unmap": true, 00:08:38.803 "flush": true, 00:08:38.804 "reset": true, 00:08:38.804 "nvme_admin": false, 00:08:38.804 "nvme_io": false, 00:08:38.804 "nvme_io_md": false, 00:08:38.804 "write_zeroes": true, 00:08:38.804 "zcopy": false, 00:08:38.804 "get_zone_info": false, 00:08:38.804 "zone_management": false, 00:08:38.804 "zone_append": false, 00:08:38.804 "compare": false, 00:08:38.804 "compare_and_write": false, 00:08:38.804 "abort": false, 00:08:38.804 "seek_hole": false, 00:08:38.804 "seek_data": false, 00:08:38.804 "copy": false, 00:08:38.804 "nvme_iov_md": false 00:08:38.804 }, 00:08:38.804 "memory_domains": [ 00:08:38.804 { 00:08:38.804 "dma_device_id": "system", 00:08:38.804 "dma_device_type": 1 00:08:38.804 }, 00:08:38.804 { 00:08:38.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.804 "dma_device_type": 2 00:08:38.804 }, 00:08:38.804 { 00:08:38.804 "dma_device_id": "system", 00:08:38.804 "dma_device_type": 1 00:08:38.804 }, 00:08:38.804 { 00:08:38.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.804 "dma_device_type": 2 00:08:38.804 }, 00:08:38.804 { 00:08:38.804 "dma_device_id": "system", 00:08:38.804 "dma_device_type": 1 00:08:38.804 }, 00:08:38.804 { 00:08:38.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.804 "dma_device_type": 2 00:08:38.804 } 00:08:38.804 ], 00:08:38.804 "driver_specific": { 00:08:38.804 "raid": { 00:08:38.804 "uuid": "76d7cc6e-2584-4db4-9a91-060383ab9e2a", 00:08:38.804 "strip_size_kb": 64, 00:08:38.804 "state": "online", 00:08:38.804 
"raid_level": "raid0", 00:08:38.804 "superblock": true, 00:08:38.804 "num_base_bdevs": 3, 00:08:38.804 "num_base_bdevs_discovered": 3, 00:08:38.804 "num_base_bdevs_operational": 3, 00:08:38.804 "base_bdevs_list": [ 00:08:38.804 { 00:08:38.804 "name": "NewBaseBdev", 00:08:38.804 "uuid": "f06b9b94-b8b5-474e-b4fb-be18325d11c4", 00:08:38.804 "is_configured": true, 00:08:38.804 "data_offset": 2048, 00:08:38.804 "data_size": 63488 00:08:38.804 }, 00:08:38.804 { 00:08:38.804 "name": "BaseBdev2", 00:08:38.804 "uuid": "49e3f52e-c6ad-4f38-a266-7fbc12b4adb4", 00:08:38.804 "is_configured": true, 00:08:38.804 "data_offset": 2048, 00:08:38.804 "data_size": 63488 00:08:38.804 }, 00:08:38.804 { 00:08:38.804 "name": "BaseBdev3", 00:08:38.804 "uuid": "2568dc36-b63d-4ea7-8322-7f596139295a", 00:08:38.804 "is_configured": true, 00:08:38.804 "data_offset": 2048, 00:08:38.804 "data_size": 63488 00:08:38.804 } 00:08:38.804 ] 00:08:38.804 } 00:08:38.804 } 00:08:38.804 }' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:38.804 BaseBdev2 00:08:38.804 BaseBdev3' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 [2024-12-07 02:41:49.813253] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:38.804 [2024-12-07 02:41:49.813322] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.804 [2024-12-07 02:41:49.813408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.804 [2024-12-07 02:41:49.813463] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.804 [2024-12-07 02:41:49.813477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75835 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75835 ']' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 75835 00:08:38.804 02:41:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75835 00:08:38.804 killing process with pid 75835 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75835' 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75835 00:08:38.804 [2024-12-07 02:41:49.852905] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.804 02:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75835 00:08:39.064 [2024-12-07 02:41:49.913208] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.324 02:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:39.324 00:08:39.324 real 0m9.069s 00:08:39.324 user 0m15.102s 00:08:39.324 sys 0m1.956s 00:08:39.324 ************************************ 00:08:39.324 END TEST raid_state_function_test_sb 00:08:39.324 ************************************ 00:08:39.324 02:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.324 02:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.324 02:41:50 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:39.324 02:41:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:39.324 02:41:50 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.324 02:41:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.325 ************************************ 00:08:39.325 START TEST raid_superblock_test 00:08:39.325 ************************************ 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:39.325 02:41:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76444 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76444 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 76444 ']' 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.325 02:41:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.584 [2024-12-07 02:41:50.459539] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:39.584 [2024-12-07 02:41:50.459707] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76444 ] 00:08:39.584 [2024-12-07 02:41:50.620269] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.843 [2024-12-07 02:41:50.690604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.843 [2024-12-07 02:41:50.766587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.843 [2024-12-07 02:41:50.766709] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:40.411 
02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 malloc1 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 [2024-12-07 02:41:51.316813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:40.411 [2024-12-07 02:41:51.316897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.411 [2024-12-07 02:41:51.316920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:40.411 [2024-12-07 02:41:51.316936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.411 [2024-12-07 02:41:51.319396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.411 [2024-12-07 02:41:51.319436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:40.411 pt1 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 malloc2 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 [2024-12-07 02:41:51.359879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.411 [2024-12-07 02:41:51.360002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.411 [2024-12-07 02:41:51.360036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:40.411 [2024-12-07 02:41:51.360068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.411 [2024-12-07 02:41:51.362536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.411 [2024-12-07 02:41:51.362617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.411 
pt2 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 malloc3 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.411 [2024-12-07 02:41:51.398568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:40.411 [2024-12-07 02:41:51.398670] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.411 [2024-12-07 02:41:51.398704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:40.411 [2024-12-07 02:41:51.398733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.411 [2024-12-07 02:41:51.401216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.411 [2024-12-07 02:41:51.401285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:40.411 pt3 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:40.411 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.412 [2024-12-07 02:41:51.410605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:40.412 [2024-12-07 02:41:51.412905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.412 [2024-12-07 02:41:51.413005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:40.412 [2024-12-07 02:41:51.413172] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:08:40.412 [2024-12-07 02:41:51.413230] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:40.412 [2024-12-07 02:41:51.413513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:08:40.412 [2024-12-07 02:41:51.413709] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:08:40.412 [2024-12-07 02:41:51.413756] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:08:40.412 [2024-12-07 02:41:51.413928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.412 02:41:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.412 "name": "raid_bdev1", 00:08:40.412 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:40.412 "strip_size_kb": 64, 00:08:40.412 "state": "online", 00:08:40.412 "raid_level": "raid0", 00:08:40.412 "superblock": true, 00:08:40.412 "num_base_bdevs": 3, 00:08:40.412 "num_base_bdevs_discovered": 3, 00:08:40.412 "num_base_bdevs_operational": 3, 00:08:40.412 "base_bdevs_list": [ 00:08:40.412 { 00:08:40.412 "name": "pt1", 00:08:40.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.412 "is_configured": true, 00:08:40.412 "data_offset": 2048, 00:08:40.412 "data_size": 63488 00:08:40.412 }, 00:08:40.412 { 00:08:40.412 "name": "pt2", 00:08:40.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.412 "is_configured": true, 00:08:40.412 "data_offset": 2048, 00:08:40.412 "data_size": 63488 00:08:40.412 }, 00:08:40.412 { 00:08:40.412 "name": "pt3", 00:08:40.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.412 "is_configured": true, 00:08:40.412 "data_offset": 2048, 00:08:40.412 "data_size": 63488 00:08:40.412 } 00:08:40.412 ] 00:08:40.412 }' 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.412 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.981 [2024-12-07 02:41:51.882039] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.981 "name": "raid_bdev1", 00:08:40.981 "aliases": [ 00:08:40.981 "cdb632a8-d659-4491-afaa-a4a09b86909b" 00:08:40.981 ], 00:08:40.981 "product_name": "Raid Volume", 00:08:40.981 "block_size": 512, 00:08:40.981 "num_blocks": 190464, 00:08:40.981 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:40.981 "assigned_rate_limits": { 00:08:40.981 "rw_ios_per_sec": 0, 00:08:40.981 "rw_mbytes_per_sec": 0, 00:08:40.981 "r_mbytes_per_sec": 0, 00:08:40.981 "w_mbytes_per_sec": 0 00:08:40.981 }, 00:08:40.981 "claimed": false, 00:08:40.981 "zoned": false, 00:08:40.981 "supported_io_types": { 00:08:40.981 "read": true, 00:08:40.981 "write": true, 00:08:40.981 "unmap": true, 00:08:40.981 "flush": true, 00:08:40.981 "reset": true, 00:08:40.981 "nvme_admin": false, 00:08:40.981 "nvme_io": false, 00:08:40.981 "nvme_io_md": false, 00:08:40.981 "write_zeroes": true, 00:08:40.981 "zcopy": false, 00:08:40.981 "get_zone_info": false, 00:08:40.981 "zone_management": false, 00:08:40.981 "zone_append": false, 00:08:40.981 "compare": 
false, 00:08:40.981 "compare_and_write": false, 00:08:40.981 "abort": false, 00:08:40.981 "seek_hole": false, 00:08:40.981 "seek_data": false, 00:08:40.981 "copy": false, 00:08:40.981 "nvme_iov_md": false 00:08:40.981 }, 00:08:40.981 "memory_domains": [ 00:08:40.981 { 00:08:40.981 "dma_device_id": "system", 00:08:40.981 "dma_device_type": 1 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.981 "dma_device_type": 2 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "dma_device_id": "system", 00:08:40.981 "dma_device_type": 1 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.981 "dma_device_type": 2 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "dma_device_id": "system", 00:08:40.981 "dma_device_type": 1 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.981 "dma_device_type": 2 00:08:40.981 } 00:08:40.981 ], 00:08:40.981 "driver_specific": { 00:08:40.981 "raid": { 00:08:40.981 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:40.981 "strip_size_kb": 64, 00:08:40.981 "state": "online", 00:08:40.981 "raid_level": "raid0", 00:08:40.981 "superblock": true, 00:08:40.981 "num_base_bdevs": 3, 00:08:40.981 "num_base_bdevs_discovered": 3, 00:08:40.981 "num_base_bdevs_operational": 3, 00:08:40.981 "base_bdevs_list": [ 00:08:40.981 { 00:08:40.981 "name": "pt1", 00:08:40.981 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.981 "is_configured": true, 00:08:40.981 "data_offset": 2048, 00:08:40.981 "data_size": 63488 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "name": "pt2", 00:08:40.981 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.981 "is_configured": true, 00:08:40.981 "data_offset": 2048, 00:08:40.981 "data_size": 63488 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "name": "pt3", 00:08:40.981 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.981 "is_configured": true, 00:08:40.981 "data_offset": 2048, 00:08:40.981 "data_size": 
63488 00:08:40.981 } 00:08:40.981 ] 00:08:40.981 } 00:08:40.981 } 00:08:40.981 }' 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:40.981 pt2 00:08:40.981 pt3' 00:08:40.981 02:41:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.981 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.981 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.981 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:40.981 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.981 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.981 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.981 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.246 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 [2024-12-07 02:41:52.153535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cdb632a8-d659-4491-afaa-a4a09b86909b 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cdb632a8-d659-4491-afaa-a4a09b86909b ']' 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 [2024-12-07 02:41:52.197190] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.247 [2024-12-07 02:41:52.197219] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.247 [2024-12-07 02:41:52.197304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.247 [2024-12-07 02:41:52.197365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.247 [2024-12-07 02:41:52.197378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:41.247 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.526 [2024-12-07 02:41:52.352969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:41.526 [2024-12-07 02:41:52.355160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:41.526 [2024-12-07 02:41:52.355207] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:41.526 [2024-12-07 02:41:52.355258] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:41.526 [2024-12-07 02:41:52.355307] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:41.526 [2024-12-07 02:41:52.355329] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:41.526 [2024-12-07 02:41:52.355343] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.526 [2024-12-07 02:41:52.355354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:08:41.526 request: 00:08:41.526 { 00:08:41.526 "name": "raid_bdev1", 00:08:41.526 "raid_level": "raid0", 00:08:41.526 "base_bdevs": [ 00:08:41.526 "malloc1", 00:08:41.526 "malloc2", 00:08:41.526 "malloc3" 00:08:41.526 ], 00:08:41.526 "strip_size_kb": 64, 00:08:41.526 "superblock": false, 00:08:41.526 "method": "bdev_raid_create", 00:08:41.526 "req_id": 1 00:08:41.526 } 00:08:41.526 Got JSON-RPC error response 00:08:41.526 response: 00:08:41.526 { 00:08:41.526 "code": -17, 00:08:41.526 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:41.526 } 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.526 [2024-12-07 02:41:52.416815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.526 [2024-12-07 02:41:52.416904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.526 [2024-12-07 02:41:52.416936] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:41.526 [2024-12-07 02:41:52.416966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.526 [2024-12-07 02:41:52.419439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.526 [2024-12-07 02:41:52.419509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.526 [2024-12-07 02:41:52.419612] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:41.526 [2024-12-07 02:41:52.419677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:41.526 pt1 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.526 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.527 "name": "raid_bdev1", 00:08:41.527 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:41.527 
"strip_size_kb": 64, 00:08:41.527 "state": "configuring", 00:08:41.527 "raid_level": "raid0", 00:08:41.527 "superblock": true, 00:08:41.527 "num_base_bdevs": 3, 00:08:41.527 "num_base_bdevs_discovered": 1, 00:08:41.527 "num_base_bdevs_operational": 3, 00:08:41.527 "base_bdevs_list": [ 00:08:41.527 { 00:08:41.527 "name": "pt1", 00:08:41.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.527 "is_configured": true, 00:08:41.527 "data_offset": 2048, 00:08:41.527 "data_size": 63488 00:08:41.527 }, 00:08:41.527 { 00:08:41.527 "name": null, 00:08:41.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.527 "is_configured": false, 00:08:41.527 "data_offset": 2048, 00:08:41.527 "data_size": 63488 00:08:41.527 }, 00:08:41.527 { 00:08:41.527 "name": null, 00:08:41.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.527 "is_configured": false, 00:08:41.527 "data_offset": 2048, 00:08:41.527 "data_size": 63488 00:08:41.527 } 00:08:41.527 ] 00:08:41.527 }' 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.527 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.803 [2024-12-07 02:41:52.856084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.803 [2024-12-07 02:41:52.856143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.803 [2024-12-07 02:41:52.856164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:41.803 [2024-12-07 02:41:52.856178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.803 [2024-12-07 02:41:52.856602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.803 [2024-12-07 02:41:52.856626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.803 [2024-12-07 02:41:52.856693] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:41.803 [2024-12-07 02:41:52.856731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.803 pt2 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.803 [2024-12-07 02:41:52.868080] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.803 02:41:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.803 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.063 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.063 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.063 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.063 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.063 "name": "raid_bdev1", 00:08:42.063 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:42.063 "strip_size_kb": 64, 00:08:42.063 "state": "configuring", 00:08:42.063 "raid_level": "raid0", 00:08:42.063 "superblock": true, 00:08:42.063 "num_base_bdevs": 3, 00:08:42.063 "num_base_bdevs_discovered": 1, 00:08:42.063 "num_base_bdevs_operational": 3, 00:08:42.063 "base_bdevs_list": [ 00:08:42.063 { 00:08:42.063 "name": "pt1", 00:08:42.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.063 "is_configured": true, 00:08:42.063 "data_offset": 2048, 00:08:42.063 "data_size": 63488 00:08:42.063 }, 00:08:42.063 { 00:08:42.063 "name": null, 00:08:42.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.063 "is_configured": false, 00:08:42.063 "data_offset": 0, 00:08:42.063 "data_size": 63488 00:08:42.063 }, 00:08:42.063 { 00:08:42.063 "name": null, 00:08:42.063 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.063 
"is_configured": false, 00:08:42.063 "data_offset": 2048, 00:08:42.063 "data_size": 63488 00:08:42.063 } 00:08:42.063 ] 00:08:42.063 }' 00:08:42.063 02:41:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.063 02:41:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.322 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:42.322 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.322 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.322 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.322 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.322 [2024-12-07 02:41:53.319431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.322 [2024-12-07 02:41:53.319543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.322 [2024-12-07 02:41:53.319613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:42.322 [2024-12-07 02:41:53.319643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.322 [2024-12-07 02:41:53.320102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.322 [2024-12-07 02:41:53.320158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.322 [2024-12-07 02:41:53.320263] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:42.322 [2024-12-07 02:41:53.320310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.322 pt2 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.323 [2024-12-07 02:41:53.331374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.323 [2024-12-07 02:41:53.331453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.323 [2024-12-07 02:41:53.331486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:42.323 [2024-12-07 02:41:53.331512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.323 [2024-12-07 02:41:53.331945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.323 [2024-12-07 02:41:53.331998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.323 [2024-12-07 02:41:53.332088] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:42.323 [2024-12-07 02:41:53.332132] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.323 [2024-12-07 02:41:53.332248] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:42.323 [2024-12-07 02:41:53.332284] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.323 [2024-12-07 02:41:53.332550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:42.323 [2024-12-07 02:41:53.332707] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:42.323 [2024-12-07 02:41:53.332725] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:08:42.323 [2024-12-07 02:41:53.332838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.323 pt3 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.323 "name": "raid_bdev1", 00:08:42.323 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:42.323 "strip_size_kb": 64, 00:08:42.323 "state": "online", 00:08:42.323 "raid_level": "raid0", 00:08:42.323 "superblock": true, 00:08:42.323 "num_base_bdevs": 3, 00:08:42.323 "num_base_bdevs_discovered": 3, 00:08:42.323 "num_base_bdevs_operational": 3, 00:08:42.323 "base_bdevs_list": [ 00:08:42.323 { 00:08:42.323 "name": "pt1", 00:08:42.323 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.323 "is_configured": true, 00:08:42.323 "data_offset": 2048, 00:08:42.323 "data_size": 63488 00:08:42.323 }, 00:08:42.323 { 00:08:42.323 "name": "pt2", 00:08:42.323 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.323 "is_configured": true, 00:08:42.323 "data_offset": 2048, 00:08:42.323 "data_size": 63488 00:08:42.323 }, 00:08:42.323 { 00:08:42.323 "name": "pt3", 00:08:42.323 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.323 "is_configured": true, 00:08:42.323 "data_offset": 2048, 00:08:42.323 "data_size": 63488 00:08:42.323 } 00:08:42.323 ] 00:08:42.323 }' 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.323 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.891 02:41:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.891 [2024-12-07 02:41:53.770908] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.891 "name": "raid_bdev1", 00:08:42.891 "aliases": [ 00:08:42.891 "cdb632a8-d659-4491-afaa-a4a09b86909b" 00:08:42.891 ], 00:08:42.891 "product_name": "Raid Volume", 00:08:42.891 "block_size": 512, 00:08:42.891 "num_blocks": 190464, 00:08:42.891 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:42.891 "assigned_rate_limits": { 00:08:42.891 "rw_ios_per_sec": 0, 00:08:42.891 "rw_mbytes_per_sec": 0, 00:08:42.891 "r_mbytes_per_sec": 0, 00:08:42.891 "w_mbytes_per_sec": 0 00:08:42.891 }, 00:08:42.891 "claimed": false, 00:08:42.891 "zoned": false, 00:08:42.891 "supported_io_types": { 00:08:42.891 "read": true, 00:08:42.891 "write": true, 00:08:42.891 "unmap": true, 00:08:42.891 "flush": true, 00:08:42.891 "reset": true, 00:08:42.891 "nvme_admin": false, 00:08:42.891 "nvme_io": false, 00:08:42.891 "nvme_io_md": false, 00:08:42.891 
"write_zeroes": true, 00:08:42.891 "zcopy": false, 00:08:42.891 "get_zone_info": false, 00:08:42.891 "zone_management": false, 00:08:42.891 "zone_append": false, 00:08:42.891 "compare": false, 00:08:42.891 "compare_and_write": false, 00:08:42.891 "abort": false, 00:08:42.891 "seek_hole": false, 00:08:42.891 "seek_data": false, 00:08:42.891 "copy": false, 00:08:42.891 "nvme_iov_md": false 00:08:42.891 }, 00:08:42.891 "memory_domains": [ 00:08:42.891 { 00:08:42.891 "dma_device_id": "system", 00:08:42.891 "dma_device_type": 1 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.891 "dma_device_type": 2 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "dma_device_id": "system", 00:08:42.891 "dma_device_type": 1 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.891 "dma_device_type": 2 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "dma_device_id": "system", 00:08:42.891 "dma_device_type": 1 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.891 "dma_device_type": 2 00:08:42.891 } 00:08:42.891 ], 00:08:42.891 "driver_specific": { 00:08:42.891 "raid": { 00:08:42.891 "uuid": "cdb632a8-d659-4491-afaa-a4a09b86909b", 00:08:42.891 "strip_size_kb": 64, 00:08:42.891 "state": "online", 00:08:42.891 "raid_level": "raid0", 00:08:42.891 "superblock": true, 00:08:42.891 "num_base_bdevs": 3, 00:08:42.891 "num_base_bdevs_discovered": 3, 00:08:42.891 "num_base_bdevs_operational": 3, 00:08:42.891 "base_bdevs_list": [ 00:08:42.891 { 00:08:42.891 "name": "pt1", 00:08:42.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.891 "is_configured": true, 00:08:42.891 "data_offset": 2048, 00:08:42.891 "data_size": 63488 00:08:42.891 }, 00:08:42.891 { 00:08:42.891 "name": "pt2", 00:08:42.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.891 "is_configured": true, 00:08:42.891 "data_offset": 2048, 00:08:42.891 "data_size": 63488 00:08:42.891 }, 00:08:42.891 
{ 00:08:42.891 "name": "pt3", 00:08:42.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.891 "is_configured": true, 00:08:42.891 "data_offset": 2048, 00:08:42.891 "data_size": 63488 00:08:42.891 } 00:08:42.891 ] 00:08:42.891 } 00:08:42.891 } 00:08:42.891 }' 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.891 pt2 00:08:42.891 pt3' 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.891 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.892 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.152 02:41:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:43.152 [2024-12-07 
02:41:54.038420] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cdb632a8-d659-4491-afaa-a4a09b86909b '!=' cdb632a8-d659-4491-afaa-a4a09b86909b ']' 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76444 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 76444 ']' 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 76444 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76444 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76444' 00:08:43.152 killing process with pid 76444 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 76444 00:08:43.152 [2024-12-07 02:41:54.131499] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.152 [2024-12-07 02:41:54.131676] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.152 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 76444 00:08:43.152 [2024-12-07 02:41:54.131781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.152 [2024-12-07 02:41:54.131793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:08:43.152 [2024-12-07 02:41:54.192282] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.722 02:41:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:43.722 00:08:43.722 real 0m4.202s 00:08:43.722 user 0m6.395s 00:08:43.722 sys 0m0.991s 00:08:43.722 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.723 02:41:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.723 ************************************ 00:08:43.723 END TEST raid_superblock_test 00:08:43.723 ************************************ 00:08:43.723 02:41:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:43.723 02:41:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:43.723 02:41:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.723 02:41:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.723 ************************************ 00:08:43.723 START TEST raid_read_error_test 00:08:43.723 ************************************ 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:43.723 02:41:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I3p9YETc4r 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76686 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76686 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76686 ']' 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.723 02:41:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.723 [2024-12-07 02:41:54.756409] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:08:43.723 [2024-12-07 02:41:54.756639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76686 ] 00:08:43.983 [2024-12-07 02:41:54.923295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.983 [2024-12-07 02:41:54.994211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.242 [2024-12-07 02:41:55.071866] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.242 [2024-12-07 02:41:55.071918] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 BaseBdev1_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 true 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 [2024-12-07 02:41:55.626738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:44.810 [2024-12-07 02:41:55.626854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.810 [2024-12-07 02:41:55.626876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:44.810 [2024-12-07 02:41:55.626892] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.810 [2024-12-07 02:41:55.629380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.810 [2024-12-07 02:41:55.629416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:44.810 BaseBdev1 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 BaseBdev2_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 true 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 [2024-12-07 02:41:55.682885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:44.810 [2024-12-07 02:41:55.682936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.810 [2024-12-07 02:41:55.682955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:44.810 [2024-12-07 02:41:55.682963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.810 [2024-12-07 02:41:55.685293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.810 [2024-12-07 02:41:55.685382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:44.810 BaseBdev2 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 BaseBdev3_malloc 00:08:44.810 02:41:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 true 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 [2024-12-07 02:41:55.729365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:44.810 [2024-12-07 02:41:55.729412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.810 [2024-12-07 02:41:55.729432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:44.810 [2024-12-07 02:41:55.729440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.810 [2024-12-07 02:41:55.731756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.810 [2024-12-07 02:41:55.731789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:44.810 BaseBdev3 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.810 [2024-12-07 02:41:55.741410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.810 [2024-12-07 02:41:55.743486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.810 [2024-12-07 02:41:55.743660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:44.810 [2024-12-07 02:41:55.743845] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:44.810 [2024-12-07 02:41:55.743863] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.810 [2024-12-07 02:41:55.744112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:44.810 [2024-12-07 02:41:55.744241] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:44.810 [2024-12-07 02:41:55.744251] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:44.810 [2024-12-07 02:41:55.744390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.810 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.811 02:41:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.811 "name": "raid_bdev1", 00:08:44.811 "uuid": "527370fd-1aa0-49f2-aaf5-7742ed140173", 00:08:44.811 "strip_size_kb": 64, 00:08:44.811 "state": "online", 00:08:44.811 "raid_level": "raid0", 00:08:44.811 "superblock": true, 00:08:44.811 "num_base_bdevs": 3, 00:08:44.811 "num_base_bdevs_discovered": 3, 00:08:44.811 "num_base_bdevs_operational": 3, 00:08:44.811 "base_bdevs_list": [ 00:08:44.811 { 00:08:44.811 "name": "BaseBdev1", 00:08:44.811 "uuid": "6123c448-813f-5ff1-a5d6-8813af3df375", 00:08:44.811 "is_configured": true, 00:08:44.811 "data_offset": 2048, 00:08:44.811 "data_size": 63488 00:08:44.811 }, 00:08:44.811 { 00:08:44.811 "name": "BaseBdev2", 00:08:44.811 "uuid": "862078d7-ccce-5b2e-999b-fa4fdb0cda93", 00:08:44.811 "is_configured": true, 00:08:44.811 "data_offset": 2048, 00:08:44.811 "data_size": 63488 
00:08:44.811 }, 00:08:44.811 { 00:08:44.811 "name": "BaseBdev3", 00:08:44.811 "uuid": "75396874-bd56-5272-858d-9b8b889bf5bc", 00:08:44.811 "is_configured": true, 00:08:44.811 "data_offset": 2048, 00:08:44.811 "data_size": 63488 00:08:44.811 } 00:08:44.811 ] 00:08:44.811 }' 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.811 02:41:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.378 02:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:45.378 02:41:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:45.378 [2024-12-07 02:41:56.261007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.318 "name": "raid_bdev1", 00:08:46.318 "uuid": "527370fd-1aa0-49f2-aaf5-7742ed140173", 00:08:46.318 "strip_size_kb": 64, 00:08:46.318 "state": "online", 00:08:46.318 "raid_level": "raid0", 00:08:46.318 "superblock": true, 00:08:46.318 "num_base_bdevs": 3, 00:08:46.318 "num_base_bdevs_discovered": 3, 00:08:46.318 "num_base_bdevs_operational": 3, 00:08:46.318 "base_bdevs_list": [ 00:08:46.318 { 00:08:46.318 "name": "BaseBdev1", 00:08:46.318 "uuid": "6123c448-813f-5ff1-a5d6-8813af3df375", 00:08:46.318 "is_configured": true, 00:08:46.318 "data_offset": 2048, 00:08:46.318 "data_size": 63488 
00:08:46.318 }, 00:08:46.318 { 00:08:46.318 "name": "BaseBdev2", 00:08:46.318 "uuid": "862078d7-ccce-5b2e-999b-fa4fdb0cda93", 00:08:46.318 "is_configured": true, 00:08:46.318 "data_offset": 2048, 00:08:46.318 "data_size": 63488 00:08:46.318 }, 00:08:46.318 { 00:08:46.318 "name": "BaseBdev3", 00:08:46.318 "uuid": "75396874-bd56-5272-858d-9b8b889bf5bc", 00:08:46.318 "is_configured": true, 00:08:46.318 "data_offset": 2048, 00:08:46.318 "data_size": 63488 00:08:46.318 } 00:08:46.318 ] 00:08:46.318 }' 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.318 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.579 [2024-12-07 02:41:57.645319] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:46.579 [2024-12-07 02:41:57.645365] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.579 [2024-12-07 02:41:57.647820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.579 [2024-12-07 02:41:57.647872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.579 [2024-12-07 02:41:57.647910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:46.579 [2024-12-07 02:41:57.647923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:46.579 { 00:08:46.579 "results": [ 00:08:46.579 { 00:08:46.579 "job": "raid_bdev1", 00:08:46.579 "core_mask": "0x1", 00:08:46.579 "workload": "randrw", 00:08:46.579 "percentage": 50, 
00:08:46.579 "status": "finished", 00:08:46.579 "queue_depth": 1, 00:08:46.579 "io_size": 131072, 00:08:46.579 "runtime": 1.384829, 00:08:46.579 "iops": 15064.675855286105, 00:08:46.579 "mibps": 1883.084481910763, 00:08:46.579 "io_failed": 1, 00:08:46.579 "io_timeout": 0, 00:08:46.579 "avg_latency_us": 93.15706764048345, 00:08:46.579 "min_latency_us": 20.79301310043668, 00:08:46.579 "max_latency_us": 1323.598253275109 00:08:46.579 } 00:08:46.579 ], 00:08:46.579 "core_count": 1 00:08:46.579 } 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76686 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76686 ']' 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76686 00:08:46.579 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:46.839 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.839 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76686 00:08:46.840 killing process with pid 76686 00:08:46.840 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.840 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.840 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76686' 00:08:46.840 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76686 00:08:46.840 02:41:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76686 00:08:46.840 [2024-12-07 02:41:57.695285] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.840 [2024-12-07 
02:41:57.742166] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I3p9YETc4r 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.100 ************************************ 00:08:47.100 END TEST raid_read_error_test 00:08:47.100 ************************************ 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:47.100 00:08:47.100 real 0m3.474s 00:08:47.100 user 0m4.226s 00:08:47.100 sys 0m0.665s 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.100 02:41:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.361 02:41:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:47.361 02:41:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.361 02:41:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.361 02:41:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.361 ************************************ 00:08:47.361 START TEST raid_write_error_test 00:08:47.361 ************************************ 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:47.361 02:41:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.361 02:41:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.slwiPTpG9W 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76822 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76822 00:08:47.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76822 ']' 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.361 02:41:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.361 [2024-12-07 02:41:58.300420] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:47.361 [2024-12-07 02:41:58.300563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76822 ] 00:08:47.621 [2024-12-07 02:41:58.460754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.621 [2024-12-07 02:41:58.533335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.621 [2024-12-07 02:41:58.610490] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.621 [2024-12-07 02:41:58.610529] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 BaseBdev1_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 true 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 [2024-12-07 02:41:59.152918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.191 [2024-12-07 02:41:59.153043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.191 [2024-12-07 02:41:59.153080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:48.191 [2024-12-07 02:41:59.153110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.191 [2024-12-07 02:41:59.155468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.191 [2024-12-07 02:41:59.155533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.191 BaseBdev1 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.191 BaseBdev2_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 true 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 [2024-12-07 02:41:59.215282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.191 [2024-12-07 02:41:59.215417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.191 [2024-12-07 02:41:59.215449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.191 [2024-12-07 02:41:59.215462] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.191 [2024-12-07 02:41:59.218637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.191 [2024-12-07 02:41:59.218679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.191 BaseBdev2 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.191 02:41:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 BaseBdev3_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 true 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.191 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.191 [2024-12-07 02:41:59.261658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.191 [2024-12-07 02:41:59.261757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.191 [2024-12-07 02:41:59.261782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:48.191 [2024-12-07 02:41:59.261791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.191 [2024-12-07 02:41:59.264306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.191 [2024-12-07 02:41:59.264340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:48.452 BaseBdev3 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.452 [2024-12-07 02:41:59.273703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.452 [2024-12-07 02:41:59.275853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.452 [2024-12-07 02:41:59.275938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.452 [2024-12-07 02:41:59.276120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:48.452 [2024-12-07 02:41:59.276135] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.452 [2024-12-07 02:41:59.276398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:48.452 [2024-12-07 02:41:59.276543] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:48.452 [2024-12-07 02:41:59.276553] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:08:48.452 [2024-12-07 02:41:59.276705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.452 "name": "raid_bdev1", 00:08:48.452 "uuid": "81d7c8c5-125d-405f-96af-85a63f51641e", 00:08:48.452 "strip_size_kb": 64, 00:08:48.452 "state": "online", 00:08:48.452 "raid_level": "raid0", 00:08:48.452 "superblock": true, 00:08:48.452 "num_base_bdevs": 3, 00:08:48.452 "num_base_bdevs_discovered": 3, 00:08:48.452 "num_base_bdevs_operational": 3, 00:08:48.452 "base_bdevs_list": [ 00:08:48.452 { 00:08:48.452 "name": "BaseBdev1", 
00:08:48.452 "uuid": "0dc2e7ac-9a0f-5dc6-aeb0-a7ef5a6dc13c", 00:08:48.452 "is_configured": true, 00:08:48.452 "data_offset": 2048, 00:08:48.452 "data_size": 63488 00:08:48.452 }, 00:08:48.452 { 00:08:48.452 "name": "BaseBdev2", 00:08:48.452 "uuid": "16726519-ee23-580e-8112-cbf13eccdea4", 00:08:48.452 "is_configured": true, 00:08:48.452 "data_offset": 2048, 00:08:48.452 "data_size": 63488 00:08:48.452 }, 00:08:48.452 { 00:08:48.452 "name": "BaseBdev3", 00:08:48.452 "uuid": "221f229c-3710-59a1-a9c8-1600824398ae", 00:08:48.452 "is_configured": true, 00:08:48.452 "data_offset": 2048, 00:08:48.452 "data_size": 63488 00:08:48.452 } 00:08:48.452 ] 00:08:48.452 }' 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.452 02:41:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.712 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.712 02:41:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.972 [2024-12-07 02:41:59.849150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.912 "name": "raid_bdev1", 00:08:49.912 "uuid": "81d7c8c5-125d-405f-96af-85a63f51641e", 00:08:49.912 "strip_size_kb": 64, 00:08:49.912 "state": "online", 00:08:49.912 
"raid_level": "raid0", 00:08:49.912 "superblock": true, 00:08:49.912 "num_base_bdevs": 3, 00:08:49.912 "num_base_bdevs_discovered": 3, 00:08:49.912 "num_base_bdevs_operational": 3, 00:08:49.912 "base_bdevs_list": [ 00:08:49.912 { 00:08:49.912 "name": "BaseBdev1", 00:08:49.912 "uuid": "0dc2e7ac-9a0f-5dc6-aeb0-a7ef5a6dc13c", 00:08:49.912 "is_configured": true, 00:08:49.912 "data_offset": 2048, 00:08:49.912 "data_size": 63488 00:08:49.912 }, 00:08:49.912 { 00:08:49.912 "name": "BaseBdev2", 00:08:49.912 "uuid": "16726519-ee23-580e-8112-cbf13eccdea4", 00:08:49.912 "is_configured": true, 00:08:49.912 "data_offset": 2048, 00:08:49.912 "data_size": 63488 00:08:49.912 }, 00:08:49.912 { 00:08:49.912 "name": "BaseBdev3", 00:08:49.912 "uuid": "221f229c-3710-59a1-a9c8-1600824398ae", 00:08:49.912 "is_configured": true, 00:08:49.912 "data_offset": 2048, 00:08:49.912 "data_size": 63488 00:08:49.912 } 00:08:49.912 ] 00:08:49.912 }' 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.912 02:42:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.481 [2024-12-07 02:42:01.261576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.481 [2024-12-07 02:42:01.261708] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.481 [2024-12-07 02:42:01.264319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.481 [2024-12-07 02:42:01.264431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.481 [2024-12-07 02:42:01.264495] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.481 [2024-12-07 02:42:01.264545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:08:50.481 { 00:08:50.481 "results": [ 00:08:50.481 { 00:08:50.481 "job": "raid_bdev1", 00:08:50.481 "core_mask": "0x1", 00:08:50.481 "workload": "randrw", 00:08:50.481 "percentage": 50, 00:08:50.481 "status": "finished", 00:08:50.481 "queue_depth": 1, 00:08:50.481 "io_size": 131072, 00:08:50.481 "runtime": 1.413189, 00:08:50.481 "iops": 15162.869226975303, 00:08:50.481 "mibps": 1895.3586533719129, 00:08:50.481 "io_failed": 1, 00:08:50.481 "io_timeout": 0, 00:08:50.481 "avg_latency_us": 92.55446895720017, 00:08:50.481 "min_latency_us": 20.68122270742358, 00:08:50.481 "max_latency_us": 1352.216593886463 00:08:50.481 } 00:08:50.481 ], 00:08:50.481 "core_count": 1 00:08:50.481 } 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76822 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76822 ']' 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76822 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76822 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 76822' 00:08:50.481 killing process with pid 76822 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76822 00:08:50.481 [2024-12-07 02:42:01.313496] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.481 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76822 00:08:50.481 [2024-12-07 02:42:01.361673] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.slwiPTpG9W 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:50.740 ************************************ 00:08:50.740 END TEST raid_write_error_test 00:08:50.740 ************************************ 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:50.740 00:08:50.740 real 0m3.543s 00:08:50.740 user 0m4.362s 00:08:50.740 sys 0m0.651s 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.740 02:42:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.741 02:42:01 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:50.741 02:42:01 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:50.741 02:42:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:50.741 02:42:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.741 02:42:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.741 ************************************ 00:08:50.741 START TEST raid_state_function_test 00:08:50.741 ************************************ 00:08:50.741 02:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:50.741 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:50.741 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:50.741 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:50.741 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:51.000 02:42:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:51.000 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:51.001 Process raid pid: 76954 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76954 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76954' 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76954 00:08:51.001 02:42:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76954 ']' 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.001 02:42:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.001 [2024-12-07 02:42:01.916372] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:08:51.001 [2024-12-07 02:42:01.916532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.260 [2024-12-07 02:42:02.081196] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.260 [2024-12-07 02:42:02.151419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.260 [2024-12-07 02:42:02.229483] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.260 [2024-12-07 02:42:02.229527] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.830 [2024-12-07 02:42:02.773139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.830 [2024-12-07 02:42:02.773282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.830 [2024-12-07 02:42:02.773302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:51.830 [2024-12-07 02:42:02.773313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:51.830 [2024-12-07 02:42:02.773319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:51.830 [2024-12-07 02:42:02.773331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.830 "name": "Existed_Raid", 00:08:51.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.830 "strip_size_kb": 64, 00:08:51.830 "state": "configuring", 00:08:51.830 "raid_level": "concat", 00:08:51.830 "superblock": false, 00:08:51.830 "num_base_bdevs": 3, 00:08:51.830 "num_base_bdevs_discovered": 0, 00:08:51.830 "num_base_bdevs_operational": 3, 00:08:51.830 "base_bdevs_list": [ 00:08:51.830 { 00:08:51.830 "name": "BaseBdev1", 00:08:51.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.830 "is_configured": false, 00:08:51.830 "data_offset": 0, 00:08:51.830 "data_size": 0 00:08:51.830 }, 00:08:51.830 { 00:08:51.830 "name": "BaseBdev2", 00:08:51.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.830 "is_configured": false, 00:08:51.830 "data_offset": 0, 00:08:51.830 "data_size": 0 00:08:51.830 }, 00:08:51.830 { 00:08:51.830 "name": "BaseBdev3", 00:08:51.830 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:51.830 "is_configured": false, 00:08:51.830 "data_offset": 0, 00:08:51.830 "data_size": 0 00:08:51.830 } 00:08:51.830 ] 00:08:51.830 }' 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.830 02:42:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.399 [2024-12-07 02:42:03.232260] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.399 [2024-12-07 02:42:03.232361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.399 [2024-12-07 02:42:03.244279] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:52.399 [2024-12-07 02:42:03.244363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:52.399 [2024-12-07 02:42:03.244389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:52.399 [2024-12-07 02:42:03.244412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:52.399 [2024-12-07 02:42:03.244430] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:52.399 [2024-12-07 02:42:03.244452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.399 [2024-12-07 02:42:03.271192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.399 BaseBdev1 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.399 [ 00:08:52.399 { 00:08:52.399 "name": "BaseBdev1", 00:08:52.399 "aliases": [ 00:08:52.399 "0ed22553-33e7-49be-975d-ff6e1dd82d5f" 00:08:52.399 ], 00:08:52.399 "product_name": "Malloc disk", 00:08:52.399 "block_size": 512, 00:08:52.399 "num_blocks": 65536, 00:08:52.399 "uuid": "0ed22553-33e7-49be-975d-ff6e1dd82d5f", 00:08:52.399 "assigned_rate_limits": { 00:08:52.399 "rw_ios_per_sec": 0, 00:08:52.399 "rw_mbytes_per_sec": 0, 00:08:52.399 "r_mbytes_per_sec": 0, 00:08:52.399 "w_mbytes_per_sec": 0 00:08:52.399 }, 00:08:52.399 "claimed": true, 00:08:52.399 "claim_type": "exclusive_write", 00:08:52.399 "zoned": false, 00:08:52.399 "supported_io_types": { 00:08:52.399 "read": true, 00:08:52.399 "write": true, 00:08:52.399 "unmap": true, 00:08:52.399 "flush": true, 00:08:52.399 "reset": true, 00:08:52.399 "nvme_admin": false, 00:08:52.399 "nvme_io": false, 00:08:52.399 "nvme_io_md": false, 00:08:52.399 "write_zeroes": true, 00:08:52.399 "zcopy": true, 00:08:52.399 "get_zone_info": false, 00:08:52.399 "zone_management": false, 00:08:52.399 "zone_append": false, 00:08:52.399 "compare": false, 00:08:52.399 "compare_and_write": false, 00:08:52.399 "abort": true, 00:08:52.399 "seek_hole": false, 00:08:52.399 "seek_data": false, 00:08:52.399 "copy": true, 00:08:52.399 "nvme_iov_md": false 00:08:52.399 }, 00:08:52.399 "memory_domains": [ 00:08:52.399 { 00:08:52.399 "dma_device_id": "system", 00:08:52.399 "dma_device_type": 1 00:08:52.399 }, 00:08:52.399 { 00:08:52.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:52.399 "dma_device_type": 2 00:08:52.399 } 00:08:52.399 ], 00:08:52.399 "driver_specific": {} 00:08:52.399 } 00:08:52.399 ] 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:52.399 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.400 02:42:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.400 "name": "Existed_Raid", 00:08:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.400 "strip_size_kb": 64, 00:08:52.400 "state": "configuring", 00:08:52.400 "raid_level": "concat", 00:08:52.400 "superblock": false, 00:08:52.400 "num_base_bdevs": 3, 00:08:52.400 "num_base_bdevs_discovered": 1, 00:08:52.400 "num_base_bdevs_operational": 3, 00:08:52.400 "base_bdevs_list": [ 00:08:52.400 { 00:08:52.400 "name": "BaseBdev1", 00:08:52.400 "uuid": "0ed22553-33e7-49be-975d-ff6e1dd82d5f", 00:08:52.400 "is_configured": true, 00:08:52.400 "data_offset": 0, 00:08:52.400 "data_size": 65536 00:08:52.400 }, 00:08:52.400 { 00:08:52.400 "name": "BaseBdev2", 00:08:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.400 "is_configured": false, 00:08:52.400 "data_offset": 0, 00:08:52.400 "data_size": 0 00:08:52.400 }, 00:08:52.400 { 00:08:52.400 "name": "BaseBdev3", 00:08:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.400 "is_configured": false, 00:08:52.400 "data_offset": 0, 00:08:52.400 "data_size": 0 00:08:52.400 } 00:08:52.400 ] 00:08:52.400 }' 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.400 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.004 [2024-12-07 02:42:03.786364] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.004 [2024-12-07 02:42:03.786424] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.004 [2024-12-07 02:42:03.798366] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.004 [2024-12-07 02:42:03.800517] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.004 [2024-12-07 02:42:03.800560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.004 [2024-12-07 02:42:03.800570] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.004 [2024-12-07 02:42:03.800594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.004 02:42:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.004 "name": "Existed_Raid", 00:08:53.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.004 "strip_size_kb": 64, 00:08:53.004 "state": "configuring", 00:08:53.004 "raid_level": "concat", 00:08:53.004 "superblock": false, 00:08:53.004 "num_base_bdevs": 3, 00:08:53.004 "num_base_bdevs_discovered": 1, 00:08:53.004 "num_base_bdevs_operational": 3, 00:08:53.004 "base_bdevs_list": [ 00:08:53.004 { 00:08:53.004 "name": "BaseBdev1", 00:08:53.004 "uuid": "0ed22553-33e7-49be-975d-ff6e1dd82d5f", 00:08:53.004 "is_configured": true, 00:08:53.004 "data_offset": 
0, 00:08:53.004 "data_size": 65536 00:08:53.004 }, 00:08:53.004 { 00:08:53.004 "name": "BaseBdev2", 00:08:53.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.004 "is_configured": false, 00:08:53.004 "data_offset": 0, 00:08:53.004 "data_size": 0 00:08:53.004 }, 00:08:53.004 { 00:08:53.004 "name": "BaseBdev3", 00:08:53.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.004 "is_configured": false, 00:08:53.004 "data_offset": 0, 00:08:53.004 "data_size": 0 00:08:53.004 } 00:08:53.004 ] 00:08:53.004 }' 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.004 02:42:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.263 [2024-12-07 02:42:04.271911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.263 BaseBdev2 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.263 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.263 [ 00:08:53.263 { 00:08:53.263 "name": "BaseBdev2", 00:08:53.263 "aliases": [ 00:08:53.263 "8361de60-c379-42d5-82af-2a38f83d0d21" 00:08:53.263 ], 00:08:53.263 "product_name": "Malloc disk", 00:08:53.263 "block_size": 512, 00:08:53.263 "num_blocks": 65536, 00:08:53.263 "uuid": "8361de60-c379-42d5-82af-2a38f83d0d21", 00:08:53.263 "assigned_rate_limits": { 00:08:53.263 "rw_ios_per_sec": 0, 00:08:53.263 "rw_mbytes_per_sec": 0, 00:08:53.263 "r_mbytes_per_sec": 0, 00:08:53.263 "w_mbytes_per_sec": 0 00:08:53.263 }, 00:08:53.263 "claimed": true, 00:08:53.263 "claim_type": "exclusive_write", 00:08:53.263 "zoned": false, 00:08:53.263 "supported_io_types": { 00:08:53.264 "read": true, 00:08:53.264 "write": true, 00:08:53.264 "unmap": true, 00:08:53.264 "flush": true, 00:08:53.264 "reset": true, 00:08:53.264 "nvme_admin": false, 00:08:53.264 "nvme_io": false, 00:08:53.264 "nvme_io_md": false, 00:08:53.264 "write_zeroes": true, 00:08:53.264 "zcopy": true, 00:08:53.264 "get_zone_info": false, 00:08:53.264 "zone_management": false, 00:08:53.264 "zone_append": false, 00:08:53.264 "compare": false, 00:08:53.264 "compare_and_write": false, 00:08:53.264 "abort": true, 00:08:53.264 "seek_hole": 
false, 00:08:53.264 "seek_data": false, 00:08:53.264 "copy": true, 00:08:53.264 "nvme_iov_md": false 00:08:53.264 }, 00:08:53.264 "memory_domains": [ 00:08:53.264 { 00:08:53.264 "dma_device_id": "system", 00:08:53.264 "dma_device_type": 1 00:08:53.264 }, 00:08:53.264 { 00:08:53.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.264 "dma_device_type": 2 00:08:53.264 } 00:08:53.264 ], 00:08:53.264 "driver_specific": {} 00:08:53.264 } 00:08:53.264 ] 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.264 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.523 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.523 "name": "Existed_Raid", 00:08:53.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.523 "strip_size_kb": 64, 00:08:53.523 "state": "configuring", 00:08:53.523 "raid_level": "concat", 00:08:53.523 "superblock": false, 00:08:53.523 "num_base_bdevs": 3, 00:08:53.523 "num_base_bdevs_discovered": 2, 00:08:53.523 "num_base_bdevs_operational": 3, 00:08:53.523 "base_bdevs_list": [ 00:08:53.523 { 00:08:53.523 "name": "BaseBdev1", 00:08:53.523 "uuid": "0ed22553-33e7-49be-975d-ff6e1dd82d5f", 00:08:53.523 "is_configured": true, 00:08:53.523 "data_offset": 0, 00:08:53.523 "data_size": 65536 00:08:53.523 }, 00:08:53.523 { 00:08:53.523 "name": "BaseBdev2", 00:08:53.523 "uuid": "8361de60-c379-42d5-82af-2a38f83d0d21", 00:08:53.523 "is_configured": true, 00:08:53.523 "data_offset": 0, 00:08:53.523 "data_size": 65536 00:08:53.523 }, 00:08:53.523 { 00:08:53.523 "name": "BaseBdev3", 00:08:53.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.523 "is_configured": false, 00:08:53.523 "data_offset": 0, 00:08:53.523 "data_size": 0 00:08:53.523 } 00:08:53.523 ] 00:08:53.523 }' 00:08:53.523 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.523 02:42:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.804 [2024-12-07 02:42:04.791873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.804 [2024-12-07 02:42:04.792008] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:08:53.804 [2024-12-07 02:42:04.792037] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:53.804 [2024-12-07 02:42:04.792416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:08:53.804 [2024-12-07 02:42:04.792615] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:08:53.804 [2024-12-07 02:42:04.792654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:08:53.804 [2024-12-07 02:42:04.792910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.804 BaseBdev3 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:53.804 02:42:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.804 [ 00:08:53.804 { 00:08:53.804 "name": "BaseBdev3", 00:08:53.804 "aliases": [ 00:08:53.804 "b9f8b48d-e386-4bff-ac8e-8dcdf0b3d538" 00:08:53.804 ], 00:08:53.804 "product_name": "Malloc disk", 00:08:53.804 "block_size": 512, 00:08:53.804 "num_blocks": 65536, 00:08:53.804 "uuid": "b9f8b48d-e386-4bff-ac8e-8dcdf0b3d538", 00:08:53.804 "assigned_rate_limits": { 00:08:53.804 "rw_ios_per_sec": 0, 00:08:53.804 "rw_mbytes_per_sec": 0, 00:08:53.804 "r_mbytes_per_sec": 0, 00:08:53.804 "w_mbytes_per_sec": 0 00:08:53.804 }, 00:08:53.804 "claimed": true, 00:08:53.804 "claim_type": "exclusive_write", 00:08:53.804 "zoned": false, 00:08:53.804 "supported_io_types": { 00:08:53.804 "read": true, 00:08:53.804 "write": true, 00:08:53.804 "unmap": true, 00:08:53.804 "flush": true, 00:08:53.804 "reset": true, 00:08:53.804 "nvme_admin": false, 00:08:53.804 "nvme_io": false, 00:08:53.804 "nvme_io_md": false, 00:08:53.804 "write_zeroes": true, 00:08:53.804 "zcopy": true, 00:08:53.804 "get_zone_info": false, 00:08:53.804 "zone_management": false, 00:08:53.804 "zone_append": false, 00:08:53.804 "compare": false, 
00:08:53.804 "compare_and_write": false, 00:08:53.804 "abort": true, 00:08:53.804 "seek_hole": false, 00:08:53.804 "seek_data": false, 00:08:53.804 "copy": true, 00:08:53.804 "nvme_iov_md": false 00:08:53.804 }, 00:08:53.804 "memory_domains": [ 00:08:53.804 { 00:08:53.804 "dma_device_id": "system", 00:08:53.804 "dma_device_type": 1 00:08:53.804 }, 00:08:53.804 { 00:08:53.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.804 "dma_device_type": 2 00:08:53.804 } 00:08:53.804 ], 00:08:53.804 "driver_specific": {} 00:08:53.804 } 00:08:53.804 ] 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.804 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.063 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.063 "name": "Existed_Raid", 00:08:54.063 "uuid": "4c81d8f6-24e5-47eb-90da-f45c29c1ba4a", 00:08:54.063 "strip_size_kb": 64, 00:08:54.063 "state": "online", 00:08:54.063 "raid_level": "concat", 00:08:54.063 "superblock": false, 00:08:54.063 "num_base_bdevs": 3, 00:08:54.063 "num_base_bdevs_discovered": 3, 00:08:54.063 "num_base_bdevs_operational": 3, 00:08:54.063 "base_bdevs_list": [ 00:08:54.063 { 00:08:54.063 "name": "BaseBdev1", 00:08:54.063 "uuid": "0ed22553-33e7-49be-975d-ff6e1dd82d5f", 00:08:54.063 "is_configured": true, 00:08:54.063 "data_offset": 0, 00:08:54.063 "data_size": 65536 00:08:54.063 }, 00:08:54.063 { 00:08:54.063 "name": "BaseBdev2", 00:08:54.063 "uuid": "8361de60-c379-42d5-82af-2a38f83d0d21", 00:08:54.063 "is_configured": true, 00:08:54.063 "data_offset": 0, 00:08:54.063 "data_size": 65536 00:08:54.063 }, 00:08:54.063 { 00:08:54.063 "name": "BaseBdev3", 00:08:54.063 "uuid": "b9f8b48d-e386-4bff-ac8e-8dcdf0b3d538", 00:08:54.063 "is_configured": true, 00:08:54.063 "data_offset": 0, 00:08:54.063 "data_size": 65536 00:08:54.063 } 00:08:54.063 ] 00:08:54.063 }' 00:08:54.063 02:42:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:54.063 02:42:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.322 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.322 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.322 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.322 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.322 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.322 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.322 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.323 [2024-12-07 02:42:05.279392] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.323 "name": "Existed_Raid", 00:08:54.323 "aliases": [ 00:08:54.323 "4c81d8f6-24e5-47eb-90da-f45c29c1ba4a" 00:08:54.323 ], 00:08:54.323 "product_name": "Raid Volume", 00:08:54.323 "block_size": 512, 00:08:54.323 "num_blocks": 196608, 00:08:54.323 "uuid": "4c81d8f6-24e5-47eb-90da-f45c29c1ba4a", 00:08:54.323 "assigned_rate_limits": { 00:08:54.323 "rw_ios_per_sec": 0, 00:08:54.323 "rw_mbytes_per_sec": 0, 00:08:54.323 "r_mbytes_per_sec": 
0, 00:08:54.323 "w_mbytes_per_sec": 0 00:08:54.323 }, 00:08:54.323 "claimed": false, 00:08:54.323 "zoned": false, 00:08:54.323 "supported_io_types": { 00:08:54.323 "read": true, 00:08:54.323 "write": true, 00:08:54.323 "unmap": true, 00:08:54.323 "flush": true, 00:08:54.323 "reset": true, 00:08:54.323 "nvme_admin": false, 00:08:54.323 "nvme_io": false, 00:08:54.323 "nvme_io_md": false, 00:08:54.323 "write_zeroes": true, 00:08:54.323 "zcopy": false, 00:08:54.323 "get_zone_info": false, 00:08:54.323 "zone_management": false, 00:08:54.323 "zone_append": false, 00:08:54.323 "compare": false, 00:08:54.323 "compare_and_write": false, 00:08:54.323 "abort": false, 00:08:54.323 "seek_hole": false, 00:08:54.323 "seek_data": false, 00:08:54.323 "copy": false, 00:08:54.323 "nvme_iov_md": false 00:08:54.323 }, 00:08:54.323 "memory_domains": [ 00:08:54.323 { 00:08:54.323 "dma_device_id": "system", 00:08:54.323 "dma_device_type": 1 00:08:54.323 }, 00:08:54.323 { 00:08:54.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.323 "dma_device_type": 2 00:08:54.323 }, 00:08:54.323 { 00:08:54.323 "dma_device_id": "system", 00:08:54.323 "dma_device_type": 1 00:08:54.323 }, 00:08:54.323 { 00:08:54.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.323 "dma_device_type": 2 00:08:54.323 }, 00:08:54.323 { 00:08:54.323 "dma_device_id": "system", 00:08:54.323 "dma_device_type": 1 00:08:54.323 }, 00:08:54.323 { 00:08:54.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.323 "dma_device_type": 2 00:08:54.323 } 00:08:54.323 ], 00:08:54.323 "driver_specific": { 00:08:54.323 "raid": { 00:08:54.323 "uuid": "4c81d8f6-24e5-47eb-90da-f45c29c1ba4a", 00:08:54.323 "strip_size_kb": 64, 00:08:54.323 "state": "online", 00:08:54.323 "raid_level": "concat", 00:08:54.323 "superblock": false, 00:08:54.323 "num_base_bdevs": 3, 00:08:54.323 "num_base_bdevs_discovered": 3, 00:08:54.323 "num_base_bdevs_operational": 3, 00:08:54.323 "base_bdevs_list": [ 00:08:54.323 { 00:08:54.323 "name": "BaseBdev1", 
00:08:54.323 "uuid": "0ed22553-33e7-49be-975d-ff6e1dd82d5f", 00:08:54.323 "is_configured": true, 00:08:54.323 "data_offset": 0, 00:08:54.323 "data_size": 65536 00:08:54.323 }, 00:08:54.323 { 00:08:54.323 "name": "BaseBdev2", 00:08:54.323 "uuid": "8361de60-c379-42d5-82af-2a38f83d0d21", 00:08:54.323 "is_configured": true, 00:08:54.323 "data_offset": 0, 00:08:54.323 "data_size": 65536 00:08:54.323 }, 00:08:54.323 { 00:08:54.323 "name": "BaseBdev3", 00:08:54.323 "uuid": "b9f8b48d-e386-4bff-ac8e-8dcdf0b3d538", 00:08:54.323 "is_configured": true, 00:08:54.323 "data_offset": 0, 00:08:54.323 "data_size": 65536 00:08:54.323 } 00:08:54.323 ] 00:08:54.323 } 00:08:54.323 } 00:08:54.323 }' 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:54.323 BaseBdev2 00:08:54.323 BaseBdev3' 00:08:54.323 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.582 [2024-12-07 02:42:05.546715] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.582 [2024-12-07 02:42:05.546743] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.582 [2024-12-07 02:42:05.546805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:54.582 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.583 "name": "Existed_Raid", 00:08:54.583 "uuid": "4c81d8f6-24e5-47eb-90da-f45c29c1ba4a", 00:08:54.583 "strip_size_kb": 64, 00:08:54.583 "state": "offline", 00:08:54.583 "raid_level": "concat", 00:08:54.583 "superblock": false, 00:08:54.583 "num_base_bdevs": 3, 00:08:54.583 "num_base_bdevs_discovered": 2, 00:08:54.583 "num_base_bdevs_operational": 2, 00:08:54.583 "base_bdevs_list": [ 00:08:54.583 { 00:08:54.583 "name": null, 00:08:54.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.583 "is_configured": false, 00:08:54.583 "data_offset": 0, 00:08:54.583 "data_size": 65536 00:08:54.583 }, 00:08:54.583 { 00:08:54.583 "name": "BaseBdev2", 00:08:54.583 "uuid": 
"8361de60-c379-42d5-82af-2a38f83d0d21", 00:08:54.583 "is_configured": true, 00:08:54.583 "data_offset": 0, 00:08:54.583 "data_size": 65536 00:08:54.583 }, 00:08:54.583 { 00:08:54.583 "name": "BaseBdev3", 00:08:54.583 "uuid": "b9f8b48d-e386-4bff-ac8e-8dcdf0b3d538", 00:08:54.583 "is_configured": true, 00:08:54.583 "data_offset": 0, 00:08:54.583 "data_size": 65536 00:08:54.583 } 00:08:54.583 ] 00:08:54.583 }' 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.583 02:42:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.150 [2024-12-07 02:42:06.074781] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.150 [2024-12-07 02:42:06.155185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:55.150 [2024-12-07 02:42:06.155299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:55.150 02:42:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.150 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.409 BaseBdev2 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:55.409 
02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.409 [
00:08:55.409 {
00:08:55.409 "name": "BaseBdev2",
00:08:55.409 "aliases": [
00:08:55.409 "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602"
00:08:55.409 ],
00:08:55.409 "product_name": "Malloc disk",
00:08:55.409 "block_size": 512,
00:08:55.409 "num_blocks": 65536,
00:08:55.409 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:55.409 "assigned_rate_limits": {
00:08:55.409 "rw_ios_per_sec": 0,
00:08:55.409 "rw_mbytes_per_sec": 0,
00:08:55.409 "r_mbytes_per_sec": 0,
00:08:55.409 "w_mbytes_per_sec": 0
00:08:55.409 },
00:08:55.409 "claimed": false,
00:08:55.409 "zoned": false,
00:08:55.409 "supported_io_types": {
00:08:55.409 "read": true,
00:08:55.409 "write": true,
00:08:55.409 "unmap": true,
00:08:55.409 "flush": true,
00:08:55.409 "reset": true,
00:08:55.409 "nvme_admin": false,
00:08:55.409 "nvme_io": false,
00:08:55.409 "nvme_io_md": false,
00:08:55.409 "write_zeroes": true,
00:08:55.409 "zcopy": true,
00:08:55.409 "get_zone_info": false,
00:08:55.409 "zone_management": false,
00:08:55.409 "zone_append": false,
00:08:55.409 "compare": false,
00:08:55.409 "compare_and_write": false,
00:08:55.409 "abort": true,
00:08:55.409 "seek_hole": false,
00:08:55.409 "seek_data": false,
00:08:55.409 "copy": true,
00:08:55.409 "nvme_iov_md": false
00:08:55.409 },
00:08:55.409 "memory_domains": [
00:08:55.409 {
00:08:55.409 "dma_device_id": "system",
00:08:55.409 "dma_device_type": 1
00:08:55.409 },
00:08:55.409 {
00:08:55.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.409 "dma_device_type": 2
00:08:55.409 }
00:08:55.409 ],
00:08:55.409 "driver_specific": {}
00:08:55.409 }
00:08:55.409 ]
00:08:55.409 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.410 BaseBdev3
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.410 [
00:08:55.410 {
00:08:55.410 "name": "BaseBdev3",
00:08:55.410 "aliases": [
00:08:55.410 "8f6e6503-c396-4348-8cbf-7efc0cc57bfb"
00:08:55.410 ],
00:08:55.410 "product_name": "Malloc disk",
00:08:55.410 "block_size": 512,
00:08:55.410 "num_blocks": 65536,
00:08:55.410 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb",
00:08:55.410 "assigned_rate_limits": {
00:08:55.410 "rw_ios_per_sec": 0,
00:08:55.410 "rw_mbytes_per_sec": 0,
00:08:55.410 "r_mbytes_per_sec": 0,
00:08:55.410 "w_mbytes_per_sec": 0
00:08:55.410 },
00:08:55.410 "claimed": false,
00:08:55.410 "zoned": false,
00:08:55.410 "supported_io_types": {
00:08:55.410 "read": true,
00:08:55.410 "write": true,
00:08:55.410 "unmap": true,
00:08:55.410 "flush": true,
00:08:55.410 "reset": true,
00:08:55.410 "nvme_admin": false,
00:08:55.410 "nvme_io": false,
00:08:55.410 "nvme_io_md": false,
00:08:55.410 "write_zeroes": true,
00:08:55.410 "zcopy": true,
00:08:55.410 "get_zone_info": false,
00:08:55.410 "zone_management": false,
00:08:55.410 "zone_append": false,
00:08:55.410 "compare": false,
00:08:55.410 "compare_and_write": false,
00:08:55.410 "abort": true,
00:08:55.410 "seek_hole": false,
00:08:55.410 "seek_data": false,
00:08:55.410 "copy": true,
00:08:55.410 "nvme_iov_md": false
00:08:55.410 },
00:08:55.410 "memory_domains": [
00:08:55.410 {
00:08:55.410 "dma_device_id": "system",
00:08:55.410 "dma_device_type": 1
00:08:55.410 },
00:08:55.410 {
00:08:55.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:55.410 "dma_device_type": 2
00:08:55.410 }
00:08:55.410 ],
00:08:55.410 "driver_specific": {}
00:08:55.410 }
00:08:55.410 ]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.410 [2024-12-07 02:42:06.354143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-07 02:42:06.354251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-07 02:42:06.354293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-07 02:42:06.356426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:55.410 "name": "Existed_Raid",
00:08:55.410 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:55.410 "strip_size_kb": 64,
00:08:55.410 "state": "configuring",
00:08:55.410 "raid_level": "concat",
00:08:55.410 "superblock": false,
00:08:55.410 "num_base_bdevs": 3,
00:08:55.410 "num_base_bdevs_discovered": 2,
00:08:55.410 "num_base_bdevs_operational": 3,
00:08:55.410 "base_bdevs_list": [
00:08:55.410 {
00:08:55.410 "name": "BaseBdev1",
00:08:55.410 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:55.410 "is_configured": false,
00:08:55.410 "data_offset": 0,
00:08:55.410 "data_size": 0
00:08:55.410 },
00:08:55.410 {
00:08:55.410 "name": "BaseBdev2",
00:08:55.410 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:55.410 "is_configured": true,
00:08:55.410 "data_offset": 0,
00:08:55.410 "data_size": 65536
00:08:55.410 },
00:08:55.410 {
00:08:55.410 "name": "BaseBdev3",
00:08:55.410 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb",
00:08:55.410 "is_configured": true,
00:08:55.410 "data_offset": 0,
00:08:55.410 "data_size": 65536
00:08:55.410 }
00:08:55.410 ]
00:08:55.410 }'
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:55.410 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.977 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:55.977 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.977 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.977 [2024-12-07 02:42:06.805360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:55.977 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.977 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:55.977 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:55.978 "name": "Existed_Raid",
00:08:55.978 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:55.978 "strip_size_kb": 64,
00:08:55.978 "state": "configuring",
00:08:55.978 "raid_level": "concat",
00:08:55.978 "superblock": false,
00:08:55.978 "num_base_bdevs": 3,
00:08:55.978 "num_base_bdevs_discovered": 1,
00:08:55.978 "num_base_bdevs_operational": 3,
00:08:55.978 "base_bdevs_list": [
00:08:55.978 {
00:08:55.978 "name": "BaseBdev1",
00:08:55.978 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:55.978 "is_configured": false,
00:08:55.978 "data_offset": 0,
00:08:55.978 "data_size": 0
00:08:55.978 },
00:08:55.978 {
00:08:55.978 "name": null,
00:08:55.978 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:55.978 "is_configured": false,
00:08:55.978 "data_offset": 0,
00:08:55.978 "data_size": 65536
00:08:55.978 },
00:08:55.978 {
00:08:55.978 "name": "BaseBdev3",
00:08:55.978 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb",
00:08:55.978 "is_configured": true,
00:08:55.978 "data_offset": 0,
00:08:55.978 "data_size": 65536
00:08:55.978 }
00:08:55.978 ]
00:08:55.978 }'
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:55.978 02:42:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.237 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.237 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.237 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.237 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:56.237 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.498 [2024-12-07 02:42:07.357556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.498 [
00:08:56.498 {
00:08:56.498 "name": "BaseBdev1",
00:08:56.498 "aliases": [
00:08:56.498 "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a"
00:08:56.498 ],
00:08:56.498 "product_name": "Malloc disk",
00:08:56.498 "block_size": 512,
00:08:56.498 "num_blocks": 65536,
00:08:56.498 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a",
00:08:56.498 "assigned_rate_limits": {
00:08:56.498 "rw_ios_per_sec": 0,
00:08:56.498 "rw_mbytes_per_sec": 0,
00:08:56.498 "r_mbytes_per_sec": 0,
00:08:56.498 "w_mbytes_per_sec": 0
00:08:56.498 },
00:08:56.498 "claimed": true,
00:08:56.498 "claim_type": "exclusive_write",
00:08:56.498 "zoned": false,
00:08:56.498 "supported_io_types": {
00:08:56.498 "read": true,
00:08:56.498 "write": true,
00:08:56.498 "unmap": true,
00:08:56.498 "flush": true,
00:08:56.498 "reset": true,
00:08:56.498 "nvme_admin": false,
00:08:56.498 "nvme_io": false,
00:08:56.498 "nvme_io_md": false,
00:08:56.498 "write_zeroes": true,
00:08:56.498 "zcopy": true,
00:08:56.498 "get_zone_info": false,
00:08:56.498 "zone_management": false,
00:08:56.498 "zone_append": false,
00:08:56.498 "compare": false,
00:08:56.498 "compare_and_write": false,
00:08:56.498 "abort": true,
00:08:56.498 "seek_hole": false,
00:08:56.498 "seek_data": false,
00:08:56.498 "copy": true,
00:08:56.498 "nvme_iov_md": false
00:08:56.498 },
00:08:56.498 "memory_domains": [
00:08:56.498 {
00:08:56.498 "dma_device_id": "system",
00:08:56.498 "dma_device_type": 1
00:08:56.498 },
00:08:56.498 {
00:08:56.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:56.498 "dma_device_type": 2
00:08:56.498 }
00:08:56.498 ],
00:08:56.498 "driver_specific": {}
00:08:56.498 }
00:08:56.498 ]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:56.498 "name": "Existed_Raid",
00:08:56.498 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:56.498 "strip_size_kb": 64,
00:08:56.498 "state": "configuring",
00:08:56.498 "raid_level": "concat",
00:08:56.498 "superblock": false,
00:08:56.498 "num_base_bdevs": 3,
00:08:56.498 "num_base_bdevs_discovered": 2,
00:08:56.498 "num_base_bdevs_operational": 3,
00:08:56.498 "base_bdevs_list": [
00:08:56.498 {
00:08:56.498 "name": "BaseBdev1",
00:08:56.498 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a",
00:08:56.498 "is_configured": true,
00:08:56.498 "data_offset": 0,
00:08:56.498 "data_size": 65536
00:08:56.498 },
00:08:56.498 {
00:08:56.498 "name": null,
00:08:56.498 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:56.498 "is_configured": false,
00:08:56.498 "data_offset": 0,
00:08:56.498 "data_size": 65536
00:08:56.498 },
00:08:56.498 {
00:08:56.498 "name": "BaseBdev3",
00:08:56.498 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb",
00:08:56.498 "is_configured": true,
00:08:56.498 "data_offset": 0,
00:08:56.498 "data_size": 65536
00:08:56.498 }
00:08:56.498 ]
00:08:56.498 }'
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:56.498 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.758 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:56.758 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:56.758 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:56.758 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:56.758 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.018 [2024-12-07 02:42:07.840769] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:57.018 "name": "Existed_Raid",
00:08:57.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.018 "strip_size_kb": 64,
00:08:57.018 "state": "configuring",
00:08:57.018 "raid_level": "concat",
00:08:57.018 "superblock": false,
00:08:57.018 "num_base_bdevs": 3,
00:08:57.018 "num_base_bdevs_discovered": 1,
00:08:57.018 "num_base_bdevs_operational": 3,
00:08:57.018 "base_bdevs_list": [
00:08:57.018 {
00:08:57.018 "name": "BaseBdev1",
00:08:57.018 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a",
00:08:57.018 "is_configured": true,
00:08:57.018 "data_offset": 0,
00:08:57.018 "data_size": 65536
00:08:57.018 },
00:08:57.018 {
00:08:57.018 "name": null,
00:08:57.018 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:57.018 "is_configured": false,
00:08:57.018 "data_offset": 0,
00:08:57.018 "data_size": 65536
00:08:57.018 },
00:08:57.018 {
00:08:57.018 "name": null,
00:08:57.018 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb",
00:08:57.018 "is_configured": false,
00:08:57.018 "data_offset": 0,
00:08:57.018 "data_size": 65536
00:08:57.018 }
00:08:57.018 ]
00:08:57.018 }'
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:57.018 02:42:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.278 [2024-12-07 02:42:08.308017] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:57.278 "name": "Existed_Raid",
00:08:57.278 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.278 "strip_size_kb": 64,
00:08:57.278 "state": "configuring",
00:08:57.278 "raid_level": "concat",
00:08:57.278 "superblock": false,
00:08:57.278 "num_base_bdevs": 3,
00:08:57.278 "num_base_bdevs_discovered": 2,
00:08:57.278 "num_base_bdevs_operational": 3,
00:08:57.278 "base_bdevs_list": [
00:08:57.278 {
00:08:57.278 "name": "BaseBdev1",
00:08:57.278 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a",
00:08:57.278 "is_configured": true,
00:08:57.278 "data_offset": 0,
00:08:57.278 "data_size": 65536
00:08:57.278 },
00:08:57.278 {
00:08:57.278 "name": null,
00:08:57.278 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:57.278 "is_configured": false,
00:08:57.278 "data_offset": 0,
00:08:57.278 "data_size": 65536
00:08:57.278 },
00:08:57.278 {
00:08:57.278 "name": "BaseBdev3",
00:08:57.278 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb",
00:08:57.278 "is_configured": true,
00:08:57.278 "data_offset": 0,
00:08:57.278 "data_size": 65536
00:08:57.278 }
00:08:57.278 ]
00:08:57.278 }'
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:57.278 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.849 [2024-12-07 02:42:08.787207] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:57.849 "name": "Existed_Raid",
00:08:57.849 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.849 "strip_size_kb": 64,
00:08:57.849 "state": "configuring",
00:08:57.849 "raid_level": "concat",
00:08:57.849 "superblock": false,
00:08:57.849 "num_base_bdevs": 3,
00:08:57.849 "num_base_bdevs_discovered": 1,
00:08:57.849 "num_base_bdevs_operational": 3,
00:08:57.849 "base_bdevs_list": [
00:08:57.849 {
00:08:57.849 "name": null,
00:08:57.849 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a",
00:08:57.849 "is_configured": false,
00:08:57.849 "data_offset": 0,
00:08:57.849 "data_size": 65536
00:08:57.849 },
00:08:57.849 {
00:08:57.849 "name": null,
00:08:57.849 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:57.849 "is_configured": false,
00:08:57.849 "data_offset": 0,
00:08:57.849 "data_size": 65536
00:08:57.849 },
00:08:57.849 {
00:08:57.849 "name": "BaseBdev3",
00:08:57.849 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb",
00:08:57.849 "is_configured": true,
00:08:57.849 "data_offset": 0,
00:08:57.849 "data_size": 65536
00:08:57.849 }
00:08:57.849 ]
00:08:57.849 }'
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:57.849 02:42:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.419 [2024-12-07 02:42:09.290100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:58.419 "name": "Existed_Raid",
00:08:58.419 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:58.419 "strip_size_kb": 64,
00:08:58.419 "state": "configuring",
00:08:58.419 "raid_level": "concat",
00:08:58.419 "superblock": false,
00:08:58.419 "num_base_bdevs": 3,
00:08:58.419 "num_base_bdevs_discovered": 2,
00:08:58.419 "num_base_bdevs_operational": 3,
00:08:58.419 "base_bdevs_list": [
00:08:58.419 {
00:08:58.419 "name": null,
00:08:58.419 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a",
00:08:58.419 "is_configured": false,
00:08:58.419 "data_offset": 0,
00:08:58.419 "data_size": 65536
00:08:58.419 },
00:08:58.419 {
00:08:58.419 "name": "BaseBdev2",
00:08:58.419 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602",
00:08:58.419 "is_configured": true,
00:08:58.419 "data_offset":
0, 00:08:58.419 "data_size": 65536 00:08:58.419 }, 00:08:58.419 { 00:08:58.419 "name": "BaseBdev3", 00:08:58.419 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb", 00:08:58.419 "is_configured": true, 00:08:58.419 "data_offset": 0, 00:08:58.419 "data_size": 65536 00:08:58.419 } 00:08:58.419 ] 00:08:58.419 }' 00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.419 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aeeefa9e-0862-4c65-87ff-2fbcaf56c44a 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.989 [2024-12-07 02:42:09.849820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:58.989 [2024-12-07 02:42:09.849916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:08:58.989 [2024-12-07 02:42:09.849931] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:58.989 [2024-12-07 02:42:09.850213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:58.989 [2024-12-07 02:42:09.850342] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:08:58.989 [2024-12-07 02:42:09.850351] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:08:58.989 [2024-12-07 02:42:09.850557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.989 NewBaseBdev 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:58.989 
02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.989 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.990 [ 00:08:58.990 { 00:08:58.990 "name": "NewBaseBdev", 00:08:58.990 "aliases": [ 00:08:58.990 "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a" 00:08:58.990 ], 00:08:58.990 "product_name": "Malloc disk", 00:08:58.990 "block_size": 512, 00:08:58.990 "num_blocks": 65536, 00:08:58.990 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a", 00:08:58.990 "assigned_rate_limits": { 00:08:58.990 "rw_ios_per_sec": 0, 00:08:58.990 "rw_mbytes_per_sec": 0, 00:08:58.990 "r_mbytes_per_sec": 0, 00:08:58.990 "w_mbytes_per_sec": 0 00:08:58.990 }, 00:08:58.990 "claimed": true, 00:08:58.990 "claim_type": "exclusive_write", 00:08:58.990 "zoned": false, 00:08:58.990 "supported_io_types": { 00:08:58.990 "read": true, 00:08:58.990 "write": true, 00:08:58.990 "unmap": true, 00:08:58.990 "flush": true, 00:08:58.990 "reset": true, 00:08:58.990 "nvme_admin": false, 00:08:58.990 "nvme_io": false, 00:08:58.990 "nvme_io_md": false, 00:08:58.990 "write_zeroes": true, 00:08:58.990 "zcopy": true, 00:08:58.990 "get_zone_info": false, 00:08:58.990 "zone_management": false, 00:08:58.990 "zone_append": false, 00:08:58.990 "compare": false, 00:08:58.990 "compare_and_write": false, 00:08:58.990 "abort": true, 00:08:58.990 "seek_hole": false, 00:08:58.990 "seek_data": false, 00:08:58.990 "copy": true, 00:08:58.990 "nvme_iov_md": false 00:08:58.990 }, 00:08:58.990 
"memory_domains": [ 00:08:58.990 { 00:08:58.990 "dma_device_id": "system", 00:08:58.990 "dma_device_type": 1 00:08:58.990 }, 00:08:58.990 { 00:08:58.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.990 "dma_device_type": 2 00:08:58.990 } 00:08:58.990 ], 00:08:58.990 "driver_specific": {} 00:08:58.990 } 00:08:58.990 ] 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.990 "name": "Existed_Raid", 00:08:58.990 "uuid": "38cbbdea-9adb-4784-92aa-667d76251970", 00:08:58.990 "strip_size_kb": 64, 00:08:58.990 "state": "online", 00:08:58.990 "raid_level": "concat", 00:08:58.990 "superblock": false, 00:08:58.990 "num_base_bdevs": 3, 00:08:58.990 "num_base_bdevs_discovered": 3, 00:08:58.990 "num_base_bdevs_operational": 3, 00:08:58.990 "base_bdevs_list": [ 00:08:58.990 { 00:08:58.990 "name": "NewBaseBdev", 00:08:58.990 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a", 00:08:58.990 "is_configured": true, 00:08:58.990 "data_offset": 0, 00:08:58.990 "data_size": 65536 00:08:58.990 }, 00:08:58.990 { 00:08:58.990 "name": "BaseBdev2", 00:08:58.990 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602", 00:08:58.990 "is_configured": true, 00:08:58.990 "data_offset": 0, 00:08:58.990 "data_size": 65536 00:08:58.990 }, 00:08:58.990 { 00:08:58.990 "name": "BaseBdev3", 00:08:58.990 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb", 00:08:58.990 "is_configured": true, 00:08:58.990 "data_offset": 0, 00:08:58.990 "data_size": 65536 00:08:58.990 } 00:08:58.990 ] 00:08:58.990 }' 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.990 02:42:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.561 [2024-12-07 02:42:10.393208] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.561 "name": "Existed_Raid", 00:08:59.561 "aliases": [ 00:08:59.561 "38cbbdea-9adb-4784-92aa-667d76251970" 00:08:59.561 ], 00:08:59.561 "product_name": "Raid Volume", 00:08:59.561 "block_size": 512, 00:08:59.561 "num_blocks": 196608, 00:08:59.561 "uuid": "38cbbdea-9adb-4784-92aa-667d76251970", 00:08:59.561 "assigned_rate_limits": { 00:08:59.561 "rw_ios_per_sec": 0, 00:08:59.561 "rw_mbytes_per_sec": 0, 00:08:59.561 "r_mbytes_per_sec": 0, 00:08:59.561 "w_mbytes_per_sec": 0 00:08:59.561 }, 00:08:59.561 "claimed": false, 00:08:59.561 "zoned": false, 00:08:59.561 "supported_io_types": { 00:08:59.561 "read": true, 00:08:59.561 "write": true, 00:08:59.561 "unmap": true, 00:08:59.561 "flush": true, 00:08:59.561 "reset": true, 00:08:59.561 "nvme_admin": false, 00:08:59.561 "nvme_io": false, 00:08:59.561 "nvme_io_md": false, 00:08:59.561 "write_zeroes": true, 
00:08:59.561 "zcopy": false, 00:08:59.561 "get_zone_info": false, 00:08:59.561 "zone_management": false, 00:08:59.561 "zone_append": false, 00:08:59.561 "compare": false, 00:08:59.561 "compare_and_write": false, 00:08:59.561 "abort": false, 00:08:59.561 "seek_hole": false, 00:08:59.561 "seek_data": false, 00:08:59.561 "copy": false, 00:08:59.561 "nvme_iov_md": false 00:08:59.561 }, 00:08:59.561 "memory_domains": [ 00:08:59.561 { 00:08:59.561 "dma_device_id": "system", 00:08:59.561 "dma_device_type": 1 00:08:59.561 }, 00:08:59.561 { 00:08:59.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.561 "dma_device_type": 2 00:08:59.561 }, 00:08:59.561 { 00:08:59.561 "dma_device_id": "system", 00:08:59.561 "dma_device_type": 1 00:08:59.561 }, 00:08:59.561 { 00:08:59.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.561 "dma_device_type": 2 00:08:59.561 }, 00:08:59.561 { 00:08:59.561 "dma_device_id": "system", 00:08:59.561 "dma_device_type": 1 00:08:59.561 }, 00:08:59.561 { 00:08:59.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.561 "dma_device_type": 2 00:08:59.561 } 00:08:59.561 ], 00:08:59.561 "driver_specific": { 00:08:59.561 "raid": { 00:08:59.561 "uuid": "38cbbdea-9adb-4784-92aa-667d76251970", 00:08:59.561 "strip_size_kb": 64, 00:08:59.561 "state": "online", 00:08:59.561 "raid_level": "concat", 00:08:59.561 "superblock": false, 00:08:59.561 "num_base_bdevs": 3, 00:08:59.561 "num_base_bdevs_discovered": 3, 00:08:59.561 "num_base_bdevs_operational": 3, 00:08:59.561 "base_bdevs_list": [ 00:08:59.561 { 00:08:59.561 "name": "NewBaseBdev", 00:08:59.561 "uuid": "aeeefa9e-0862-4c65-87ff-2fbcaf56c44a", 00:08:59.561 "is_configured": true, 00:08:59.561 "data_offset": 0, 00:08:59.561 "data_size": 65536 00:08:59.561 }, 00:08:59.561 { 00:08:59.561 "name": "BaseBdev2", 00:08:59.561 "uuid": "64a2ee60-18c1-4b22-a7b4-9d1ce0f89602", 00:08:59.561 "is_configured": true, 00:08:59.561 "data_offset": 0, 00:08:59.561 "data_size": 65536 00:08:59.561 }, 00:08:59.561 { 
00:08:59.561 "name": "BaseBdev3", 00:08:59.561 "uuid": "8f6e6503-c396-4348-8cbf-7efc0cc57bfb", 00:08:59.561 "is_configured": true, 00:08:59.561 "data_offset": 0, 00:08:59.561 "data_size": 65536 00:08:59.561 } 00:08:59.561 ] 00:08:59.561 } 00:08:59.561 } 00:08:59.561 }' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:59.561 BaseBdev2 00:08:59.561 BaseBdev3' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.561 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.562 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:59.822 [2024-12-07 02:42:10.680418] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.822 [2024-12-07 02:42:10.680449] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.822 [2024-12-07 02:42:10.680522] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.822 [2024-12-07 02:42:10.680594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.822 [2024-12-07 02:42:10.680608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76954 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76954 ']' 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76954 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76954 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76954' 00:08:59.822 killing process with pid 76954 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76954 00:08:59.822 [2024-12-07 02:42:10.721908] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.822 02:42:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76954 00:08:59.822 [2024-12-07 02:42:10.779849] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.083 02:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:00.083 00:09:00.083 real 0m9.333s 00:09:00.083 user 0m15.557s 00:09:00.083 sys 0m2.010s 00:09:00.083 02:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.083 ************************************ 00:09:00.083 END TEST raid_state_function_test 00:09:00.083 ************************************ 00:09:00.083 02:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.344 02:42:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:00.344 02:42:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:00.344 02:42:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.344 02:42:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.344 ************************************ 00:09:00.344 START TEST raid_state_function_test_sb 00:09:00.344 ************************************ 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:00.344 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77559 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77559' 00:09:00.345 Process raid pid: 77559 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77559 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77559 ']' 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.345 02:42:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.345 [2024-12-07 02:42:11.323097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:00.345 [2024-12-07 02:42:11.323243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.604 [2024-12-07 02:42:11.472790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.604 [2024-12-07 02:42:11.542019] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.604 [2024-12-07 02:42:11.617832] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.604 [2024-12-07 02:42:11.617873] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.172 [2024-12-07 02:42:12.157408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.172 [2024-12-07 02:42:12.157469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.172 [2024-12-07 
02:42:12.157485] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.172 [2024-12-07 02:42:12.157495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.172 [2024-12-07 02:42:12.157502] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.172 [2024-12-07 02:42:12.157515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.172 "name": "Existed_Raid", 00:09:01.172 "uuid": "3ec1939b-8dcb-4b37-bab7-145db061ede0", 00:09:01.172 "strip_size_kb": 64, 00:09:01.172 "state": "configuring", 00:09:01.172 "raid_level": "concat", 00:09:01.172 "superblock": true, 00:09:01.172 "num_base_bdevs": 3, 00:09:01.172 "num_base_bdevs_discovered": 0, 00:09:01.172 "num_base_bdevs_operational": 3, 00:09:01.172 "base_bdevs_list": [ 00:09:01.172 { 00:09:01.172 "name": "BaseBdev1", 00:09:01.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.172 "is_configured": false, 00:09:01.172 "data_offset": 0, 00:09:01.172 "data_size": 0 00:09:01.172 }, 00:09:01.172 { 00:09:01.172 "name": "BaseBdev2", 00:09:01.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.172 "is_configured": false, 00:09:01.172 "data_offset": 0, 00:09:01.172 "data_size": 0 00:09:01.172 }, 00:09:01.172 { 00:09:01.172 "name": "BaseBdev3", 00:09:01.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.172 "is_configured": false, 00:09:01.172 "data_offset": 0, 00:09:01.172 "data_size": 0 00:09:01.172 } 00:09:01.172 ] 00:09:01.172 }' 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.172 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.756 [2024-12-07 02:42:12.628475] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.756 [2024-12-07 02:42:12.628587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.756 [2024-12-07 02:42:12.640488] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.756 [2024-12-07 02:42:12.640567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.756 [2024-12-07 02:42:12.640622] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.756 [2024-12-07 02:42:12.640646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.756 [2024-12-07 02:42:12.640663] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:01.756 [2024-12-07 02:42:12.640687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:01.756 
02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.756 [2024-12-07 02:42:12.667448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.756 BaseBdev1 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.756 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.756 [ 00:09:01.756 { 
00:09:01.756 "name": "BaseBdev1", 00:09:01.756 "aliases": [ 00:09:01.756 "f06831fa-1c36-4303-af1e-922de1f1c5f8" 00:09:01.756 ], 00:09:01.756 "product_name": "Malloc disk", 00:09:01.756 "block_size": 512, 00:09:01.756 "num_blocks": 65536, 00:09:01.756 "uuid": "f06831fa-1c36-4303-af1e-922de1f1c5f8", 00:09:01.756 "assigned_rate_limits": { 00:09:01.756 "rw_ios_per_sec": 0, 00:09:01.756 "rw_mbytes_per_sec": 0, 00:09:01.756 "r_mbytes_per_sec": 0, 00:09:01.756 "w_mbytes_per_sec": 0 00:09:01.756 }, 00:09:01.756 "claimed": true, 00:09:01.756 "claim_type": "exclusive_write", 00:09:01.756 "zoned": false, 00:09:01.756 "supported_io_types": { 00:09:01.756 "read": true, 00:09:01.756 "write": true, 00:09:01.756 "unmap": true, 00:09:01.756 "flush": true, 00:09:01.756 "reset": true, 00:09:01.756 "nvme_admin": false, 00:09:01.756 "nvme_io": false, 00:09:01.756 "nvme_io_md": false, 00:09:01.756 "write_zeroes": true, 00:09:01.756 "zcopy": true, 00:09:01.756 "get_zone_info": false, 00:09:01.756 "zone_management": false, 00:09:01.756 "zone_append": false, 00:09:01.756 "compare": false, 00:09:01.756 "compare_and_write": false, 00:09:01.756 "abort": true, 00:09:01.756 "seek_hole": false, 00:09:01.756 "seek_data": false, 00:09:01.756 "copy": true, 00:09:01.756 "nvme_iov_md": false 00:09:01.756 }, 00:09:01.756 "memory_domains": [ 00:09:01.756 { 00:09:01.756 "dma_device_id": "system", 00:09:01.756 "dma_device_type": 1 00:09:01.756 }, 00:09:01.756 { 00:09:01.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.756 "dma_device_type": 2 00:09:01.756 } 00:09:01.756 ], 00:09:01.756 "driver_specific": {} 00:09:01.756 } 00:09:01.757 ] 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.757 "name": "Existed_Raid", 00:09:01.757 "uuid": "a0e82840-df36-40e1-9006-a90aeb9f178c", 00:09:01.757 "strip_size_kb": 64, 00:09:01.757 "state": "configuring", 00:09:01.757 "raid_level": "concat", 00:09:01.757 "superblock": true, 00:09:01.757 
"num_base_bdevs": 3, 00:09:01.757 "num_base_bdevs_discovered": 1, 00:09:01.757 "num_base_bdevs_operational": 3, 00:09:01.757 "base_bdevs_list": [ 00:09:01.757 { 00:09:01.757 "name": "BaseBdev1", 00:09:01.757 "uuid": "f06831fa-1c36-4303-af1e-922de1f1c5f8", 00:09:01.757 "is_configured": true, 00:09:01.757 "data_offset": 2048, 00:09:01.757 "data_size": 63488 00:09:01.757 }, 00:09:01.757 { 00:09:01.757 "name": "BaseBdev2", 00:09:01.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.757 "is_configured": false, 00:09:01.757 "data_offset": 0, 00:09:01.757 "data_size": 0 00:09:01.757 }, 00:09:01.757 { 00:09:01.757 "name": "BaseBdev3", 00:09:01.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.757 "is_configured": false, 00:09:01.757 "data_offset": 0, 00:09:01.757 "data_size": 0 00:09:01.757 } 00:09:01.757 ] 00:09:01.757 }' 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.757 02:42:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.326 [2024-12-07 02:42:13.194549] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.326 [2024-12-07 02:42:13.194666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.326 
02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.326 [2024-12-07 02:42:13.206595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.326 [2024-12-07 02:42:13.208803] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.326 [2024-12-07 02:42:13.208875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.326 [2024-12-07 02:42:13.208903] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.326 [2024-12-07 02:42:13.208925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.326 02:42:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.327 "name": "Existed_Raid", 00:09:02.327 "uuid": "523397e3-8ee6-4726-8e37-8d57c96ee65d", 00:09:02.327 "strip_size_kb": 64, 00:09:02.327 "state": "configuring", 00:09:02.327 "raid_level": "concat", 00:09:02.327 "superblock": true, 00:09:02.327 "num_base_bdevs": 3, 00:09:02.327 "num_base_bdevs_discovered": 1, 00:09:02.327 "num_base_bdevs_operational": 3, 00:09:02.327 "base_bdevs_list": [ 00:09:02.327 { 00:09:02.327 "name": "BaseBdev1", 00:09:02.327 "uuid": "f06831fa-1c36-4303-af1e-922de1f1c5f8", 00:09:02.327 "is_configured": true, 00:09:02.327 "data_offset": 2048, 00:09:02.327 "data_size": 63488 00:09:02.327 }, 00:09:02.327 { 00:09:02.327 "name": "BaseBdev2", 00:09:02.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.327 "is_configured": false, 00:09:02.327 "data_offset": 0, 00:09:02.327 "data_size": 0 00:09:02.327 }, 00:09:02.327 { 00:09:02.327 "name": "BaseBdev3", 00:09:02.327 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:02.327 "is_configured": false, 00:09:02.327 "data_offset": 0, 00:09:02.327 "data_size": 0 00:09:02.327 } 00:09:02.327 ] 00:09:02.327 }' 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.327 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.586 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:02.586 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.586 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.846 [2024-12-07 02:42:13.682620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.846 BaseBdev2 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.846 [ 00:09:02.846 { 00:09:02.846 "name": "BaseBdev2", 00:09:02.846 "aliases": [ 00:09:02.846 "1a2072b7-9d59-4ffd-9f50-5fb79033d4f8" 00:09:02.846 ], 00:09:02.846 "product_name": "Malloc disk", 00:09:02.846 "block_size": 512, 00:09:02.846 "num_blocks": 65536, 00:09:02.846 "uuid": "1a2072b7-9d59-4ffd-9f50-5fb79033d4f8", 00:09:02.846 "assigned_rate_limits": { 00:09:02.846 "rw_ios_per_sec": 0, 00:09:02.846 "rw_mbytes_per_sec": 0, 00:09:02.846 "r_mbytes_per_sec": 0, 00:09:02.846 "w_mbytes_per_sec": 0 00:09:02.846 }, 00:09:02.846 "claimed": true, 00:09:02.846 "claim_type": "exclusive_write", 00:09:02.846 "zoned": false, 00:09:02.846 "supported_io_types": { 00:09:02.846 "read": true, 00:09:02.846 "write": true, 00:09:02.846 "unmap": true, 00:09:02.846 "flush": true, 00:09:02.846 "reset": true, 00:09:02.846 "nvme_admin": false, 00:09:02.846 "nvme_io": false, 00:09:02.846 "nvme_io_md": false, 00:09:02.846 "write_zeroes": true, 00:09:02.846 "zcopy": true, 00:09:02.846 "get_zone_info": false, 00:09:02.846 "zone_management": false, 00:09:02.846 "zone_append": false, 00:09:02.846 "compare": false, 00:09:02.846 "compare_and_write": false, 00:09:02.846 "abort": true, 00:09:02.846 "seek_hole": false, 00:09:02.846 "seek_data": false, 00:09:02.846 "copy": true, 00:09:02.846 "nvme_iov_md": false 00:09:02.846 }, 00:09:02.846 "memory_domains": [ 00:09:02.846 { 00:09:02.846 "dma_device_id": "system", 00:09:02.846 "dma_device_type": 1 00:09:02.846 }, 00:09:02.846 { 00:09:02.846 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.846 "dma_device_type": 2 00:09:02.846 } 00:09:02.846 ], 00:09:02.846 "driver_specific": {} 00:09:02.846 } 00:09:02.846 ] 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.846 "name": "Existed_Raid", 00:09:02.846 "uuid": "523397e3-8ee6-4726-8e37-8d57c96ee65d", 00:09:02.846 "strip_size_kb": 64, 00:09:02.846 "state": "configuring", 00:09:02.846 "raid_level": "concat", 00:09:02.846 "superblock": true, 00:09:02.846 "num_base_bdevs": 3, 00:09:02.846 "num_base_bdevs_discovered": 2, 00:09:02.846 "num_base_bdevs_operational": 3, 00:09:02.846 "base_bdevs_list": [ 00:09:02.846 { 00:09:02.846 "name": "BaseBdev1", 00:09:02.846 "uuid": "f06831fa-1c36-4303-af1e-922de1f1c5f8", 00:09:02.846 "is_configured": true, 00:09:02.846 "data_offset": 2048, 00:09:02.846 "data_size": 63488 00:09:02.846 }, 00:09:02.846 { 00:09:02.846 "name": "BaseBdev2", 00:09:02.846 "uuid": "1a2072b7-9d59-4ffd-9f50-5fb79033d4f8", 00:09:02.846 "is_configured": true, 00:09:02.846 "data_offset": 2048, 00:09:02.846 "data_size": 63488 00:09:02.846 }, 00:09:02.846 { 00:09:02.846 "name": "BaseBdev3", 00:09:02.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.846 "is_configured": false, 00:09:02.846 "data_offset": 0, 00:09:02.846 "data_size": 0 00:09:02.846 } 00:09:02.846 ] 00:09:02.846 }' 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.846 02:42:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:03.105 02:42:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.105 [2024-12-07 02:42:14.099266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.105 [2024-12-07 02:42:14.099618] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:03.105 [2024-12-07 02:42:14.099646] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.105 [2024-12-07 02:42:14.099987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:03.105 [2024-12-07 02:42:14.100110] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:03.105 [2024-12-07 02:42:14.100119] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:03.105 BaseBdev3 00:09:03.105 [2024-12-07 02:42:14.100246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.105 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.105 [ 00:09:03.106 { 00:09:03.106 "name": "BaseBdev3", 00:09:03.106 "aliases": [ 00:09:03.106 "348a0371-cee4-4876-ac62-79349d36ecf0" 00:09:03.106 ], 00:09:03.106 "product_name": "Malloc disk", 00:09:03.106 "block_size": 512, 00:09:03.106 "num_blocks": 65536, 00:09:03.106 "uuid": "348a0371-cee4-4876-ac62-79349d36ecf0", 00:09:03.106 "assigned_rate_limits": { 00:09:03.106 "rw_ios_per_sec": 0, 00:09:03.106 "rw_mbytes_per_sec": 0, 00:09:03.106 "r_mbytes_per_sec": 0, 00:09:03.106 "w_mbytes_per_sec": 0 00:09:03.106 }, 00:09:03.106 "claimed": true, 00:09:03.106 "claim_type": "exclusive_write", 00:09:03.106 "zoned": false, 00:09:03.106 "supported_io_types": { 00:09:03.106 "read": true, 00:09:03.106 "write": true, 00:09:03.106 "unmap": true, 00:09:03.106 "flush": true, 00:09:03.106 "reset": true, 00:09:03.106 "nvme_admin": false, 00:09:03.106 "nvme_io": false, 00:09:03.106 "nvme_io_md": false, 00:09:03.106 "write_zeroes": true, 00:09:03.106 "zcopy": true, 00:09:03.106 "get_zone_info": false, 00:09:03.106 "zone_management": false, 00:09:03.106 "zone_append": false, 00:09:03.106 "compare": false, 00:09:03.106 "compare_and_write": false, 00:09:03.106 "abort": true, 00:09:03.106 "seek_hole": false, 00:09:03.106 "seek_data": false, 
00:09:03.106 "copy": true, 00:09:03.106 "nvme_iov_md": false 00:09:03.106 }, 00:09:03.106 "memory_domains": [ 00:09:03.106 { 00:09:03.106 "dma_device_id": "system", 00:09:03.106 "dma_device_type": 1 00:09:03.106 }, 00:09:03.106 { 00:09:03.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.106 "dma_device_type": 2 00:09:03.106 } 00:09:03.106 ], 00:09:03.106 "driver_specific": {} 00:09:03.106 } 00:09:03.106 ] 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.106 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.365 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.365 "name": "Existed_Raid", 00:09:03.365 "uuid": "523397e3-8ee6-4726-8e37-8d57c96ee65d", 00:09:03.365 "strip_size_kb": 64, 00:09:03.365 "state": "online", 00:09:03.365 "raid_level": "concat", 00:09:03.365 "superblock": true, 00:09:03.365 "num_base_bdevs": 3, 00:09:03.365 "num_base_bdevs_discovered": 3, 00:09:03.365 "num_base_bdevs_operational": 3, 00:09:03.365 "base_bdevs_list": [ 00:09:03.365 { 00:09:03.365 "name": "BaseBdev1", 00:09:03.365 "uuid": "f06831fa-1c36-4303-af1e-922de1f1c5f8", 00:09:03.365 "is_configured": true, 00:09:03.365 "data_offset": 2048, 00:09:03.365 "data_size": 63488 00:09:03.365 }, 00:09:03.365 { 00:09:03.365 "name": "BaseBdev2", 00:09:03.365 "uuid": "1a2072b7-9d59-4ffd-9f50-5fb79033d4f8", 00:09:03.365 "is_configured": true, 00:09:03.365 "data_offset": 2048, 00:09:03.365 "data_size": 63488 00:09:03.365 }, 00:09:03.365 { 00:09:03.365 "name": "BaseBdev3", 00:09:03.366 "uuid": "348a0371-cee4-4876-ac62-79349d36ecf0", 00:09:03.366 "is_configured": true, 00:09:03.366 "data_offset": 2048, 00:09:03.366 "data_size": 63488 00:09:03.366 } 00:09:03.366 ] 00:09:03.366 }' 00:09:03.366 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.366 02:42:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:03.626 [2024-12-07 02:42:14.598747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:03.626 "name": "Existed_Raid",
00:09:03.626 "aliases": [
00:09:03.626 "523397e3-8ee6-4726-8e37-8d57c96ee65d"
00:09:03.626 ],
00:09:03.626 "product_name": "Raid Volume",
00:09:03.626 "block_size": 512,
00:09:03.626 "num_blocks": 190464,
00:09:03.626 "uuid": "523397e3-8ee6-4726-8e37-8d57c96ee65d",
00:09:03.626 "assigned_rate_limits": {
00:09:03.626 "rw_ios_per_sec": 0,
00:09:03.626 "rw_mbytes_per_sec": 0,
00:09:03.626 "r_mbytes_per_sec": 0,
00:09:03.626 "w_mbytes_per_sec": 0
00:09:03.626 },
00:09:03.626 "claimed": false,
00:09:03.626 "zoned": false,
00:09:03.626 "supported_io_types": {
00:09:03.626 "read": true,
00:09:03.626 "write": true,
00:09:03.626 "unmap": true,
00:09:03.626 "flush": true,
00:09:03.626 "reset": true,
00:09:03.626 "nvme_admin": false,
00:09:03.626 "nvme_io": false,
00:09:03.626 "nvme_io_md": false,
00:09:03.626 "write_zeroes": true,
00:09:03.626 "zcopy": false,
00:09:03.626 "get_zone_info": false,
00:09:03.626 "zone_management": false,
00:09:03.626 "zone_append": false,
00:09:03.626 "compare": false,
00:09:03.626 "compare_and_write": false,
00:09:03.626 "abort": false,
00:09:03.626 "seek_hole": false,
00:09:03.626 "seek_data": false,
00:09:03.626 "copy": false,
00:09:03.626 "nvme_iov_md": false
00:09:03.626 },
00:09:03.626 "memory_domains": [
00:09:03.626 {
00:09:03.626 "dma_device_id": "system",
00:09:03.626 "dma_device_type": 1
00:09:03.626 },
00:09:03.626 {
00:09:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.626 "dma_device_type": 2
00:09:03.626 },
00:09:03.626 {
00:09:03.626 "dma_device_id": "system",
00:09:03.626 "dma_device_type": 1
00:09:03.626 },
00:09:03.626 {
00:09:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.626 "dma_device_type": 2
00:09:03.626 },
00:09:03.626 {
00:09:03.626 "dma_device_id": "system",
00:09:03.626 "dma_device_type": 1
00:09:03.626 },
00:09:03.626 {
00:09:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:03.626 "dma_device_type": 2
00:09:03.626 }
00:09:03.626 ],
00:09:03.626 "driver_specific": {
00:09:03.626 "raid": {
00:09:03.626 "uuid": "523397e3-8ee6-4726-8e37-8d57c96ee65d",
00:09:03.626 "strip_size_kb": 64,
00:09:03.626 "state": "online",
00:09:03.626 "raid_level": "concat",
00:09:03.626 "superblock": true,
00:09:03.626 "num_base_bdevs": 3,
00:09:03.626 "num_base_bdevs_discovered": 3,
00:09:03.626 "num_base_bdevs_operational": 3,
00:09:03.626 "base_bdevs_list": [
00:09:03.626 {
00:09:03.626 "name": "BaseBdev1",
00:09:03.626 "uuid": "f06831fa-1c36-4303-af1e-922de1f1c5f8",
00:09:03.626 "is_configured": true,
00:09:03.626 "data_offset": 2048,
00:09:03.626 "data_size": 63488
00:09:03.626 },
00:09:03.626 {
00:09:03.626 "name": "BaseBdev2",
00:09:03.626 "uuid": "1a2072b7-9d59-4ffd-9f50-5fb79033d4f8",
00:09:03.626 "is_configured": true,
00:09:03.626 "data_offset": 2048,
00:09:03.626 "data_size": 63488
00:09:03.626 },
00:09:03.626 {
00:09:03.626 "name": "BaseBdev3",
00:09:03.626 "uuid": "348a0371-cee4-4876-ac62-79349d36ecf0",
00:09:03.626 "is_configured": true,
00:09:03.626 "data_offset": 2048,
00:09:03.626 "data_size": 63488
00:09:03.626 }
00:09:03.626 ]
00:09:03.626 }
00:09:03.626 }
00:09:03.626 }'
00:09:03.626 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:03.627 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:03.627 BaseBdev2
00:09:03.627 BaseBdev3'
00:09:03.627 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.887 [2024-12-07 02:42:14.878042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-12-07 02:42:14.878074] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-07 02:42:14.878133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.887 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:03.888 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.888 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:03.888 "name": "Existed_Raid",
00:09:03.888 "uuid": "523397e3-8ee6-4726-8e37-8d57c96ee65d",
00:09:03.888 "strip_size_kb": 64,
00:09:03.888 "state": "offline",
00:09:03.888 "raid_level": "concat",
00:09:03.888 "superblock": true,
00:09:03.888 "num_base_bdevs": 3,
00:09:03.888 "num_base_bdevs_discovered": 2,
00:09:03.888 "num_base_bdevs_operational": 2,
00:09:03.888 "base_bdevs_list": [
00:09:03.888 {
00:09:03.888 "name": null,
00:09:03.888 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.888 "is_configured": false,
00:09:03.888 "data_offset": 0,
00:09:03.888 "data_size": 63488
00:09:03.888 },
00:09:03.888 {
00:09:03.888 "name": "BaseBdev2",
00:09:03.888 "uuid": "1a2072b7-9d59-4ffd-9f50-5fb79033d4f8",
00:09:03.888 "is_configured": true,
00:09:03.888 "data_offset": 2048,
00:09:03.888 "data_size": 63488
00:09:03.888 },
00:09:03.888 {
00:09:03.888 "name": "BaseBdev3",
00:09:03.888 "uuid": "348a0371-cee4-4876-ac62-79349d36ecf0",
00:09:03.888 "is_configured": true,
00:09:03.888 "data_offset": 2048,
00:09:03.888 "data_size": 63488
00:09:03.888 }
00:09:03.888 ]
00:09:03.888 }'
00:09:03.888 02:42:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:03.888 02:42:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.455 [2024-12-07 02:42:15.381828] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.455 [2024-12-07 02:42:15.462042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-12-07 02:42:15.462095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline
00:09:04.455 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.456 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:04.456 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:04.456 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:04.456 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:04.456 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.456 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.456 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.716 BaseBdev2
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.716 [
00:09:04.716 {
00:09:04.716 "name": "BaseBdev2",
00:09:04.716 "aliases": [
00:09:04.716 "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0"
00:09:04.716 ],
00:09:04.716 "product_name": "Malloc disk",
00:09:04.716 "block_size": 512,
00:09:04.716 "num_blocks": 65536,
00:09:04.716 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0",
00:09:04.716 "assigned_rate_limits": {
00:09:04.716 "rw_ios_per_sec": 0,
00:09:04.716 "rw_mbytes_per_sec": 0,
00:09:04.716 "r_mbytes_per_sec": 0,
00:09:04.716 "w_mbytes_per_sec": 0
00:09:04.716 },
00:09:04.716 "claimed": false,
00:09:04.716 "zoned": false,
00:09:04.716 "supported_io_types": {
00:09:04.716 "read": true,
00:09:04.716 "write": true,
00:09:04.716 "unmap": true,
00:09:04.716 "flush": true,
00:09:04.716 "reset": true,
00:09:04.716 "nvme_admin": false,
00:09:04.716 "nvme_io": false,
00:09:04.716 "nvme_io_md": false,
00:09:04.716 "write_zeroes": true,
00:09:04.716 "zcopy": true,
00:09:04.716 "get_zone_info": false,
00:09:04.716 "zone_management": false,
00:09:04.716 "zone_append": false,
00:09:04.716 "compare": false,
00:09:04.716 "compare_and_write": false,
00:09:04.716 "abort": true,
00:09:04.716 "seek_hole": false,
00:09:04.716 "seek_data": false,
00:09:04.716 "copy": true,
00:09:04.716 "nvme_iov_md": false
00:09:04.716 },
00:09:04.716 "memory_domains": [
00:09:04.716 {
00:09:04.716 "dma_device_id": "system",
00:09:04.716 "dma_device_type": 1
00:09:04.716 },
00:09:04.716 {
00:09:04.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:04.716 "dma_device_type": 2
00:09:04.716 }
00:09:04.716 ],
00:09:04.716 "driver_specific": {}
00:09:04.716 }
00:09:04.716 ]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.716 BaseBdev3
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.716 [
00:09:04.716 {
00:09:04.716 "name": "BaseBdev3",
00:09:04.716 "aliases": [
00:09:04.716 "de662955-96f9-44fe-9e4f-c7aca1da96d0"
00:09:04.716 ],
00:09:04.716 "product_name": "Malloc disk",
00:09:04.716 "block_size": 512,
00:09:04.716 "num_blocks": 65536,
00:09:04.716 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0",
00:09:04.716 "assigned_rate_limits": {
00:09:04.716 "rw_ios_per_sec": 0,
00:09:04.716 "rw_mbytes_per_sec": 0,
00:09:04.716 "r_mbytes_per_sec": 0,
00:09:04.716 "w_mbytes_per_sec": 0
00:09:04.716 },
00:09:04.716 "claimed": false,
00:09:04.716 "zoned": false,
00:09:04.716 "supported_io_types": {
00:09:04.716 "read": true,
00:09:04.716 "write": true,
00:09:04.716 "unmap": true,
00:09:04.716 "flush": true,
00:09:04.716 "reset": true,
00:09:04.716 "nvme_admin": false,
00:09:04.716 "nvme_io": false,
00:09:04.716 "nvme_io_md": false,
00:09:04.716 "write_zeroes": true,
00:09:04.716 "zcopy": true,
00:09:04.716 "get_zone_info": false,
00:09:04.716 "zone_management": false,
00:09:04.716 "zone_append": false,
00:09:04.716 "compare": false,
00:09:04.716 "compare_and_write": false,
00:09:04.716 "abort": true,
00:09:04.716 "seek_hole": false,
00:09:04.716 "seek_data": false,
00:09:04.716 "copy": true,
00:09:04.716 "nvme_iov_md": false
00:09:04.716 },
00:09:04.716 "memory_domains": [
00:09:04.716 {
00:09:04.716 "dma_device_id": "system",
00:09:04.716 "dma_device_type": 1
00:09:04.716 },
00:09:04.716 {
00:09:04.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:04.716 "dma_device_type": 2
00:09:04.716 }
00:09:04.716 ],
00:09:04.716 "driver_specific": {}
00:09:04.716 }
00:09:04.716 ]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.716 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.716 [2024-12-07 02:42:15.656505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-07 02:42:15.656552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-07 02:42:15.656573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-07 02:42:15.658635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:04.717 "name": "Existed_Raid",
00:09:04.717 "uuid": "68291e43-7faf-4387-91d4-076783350430",
00:09:04.717 "strip_size_kb": 64,
00:09:04.717 "state": "configuring",
00:09:04.717 "raid_level": "concat",
00:09:04.717 "superblock": true,
00:09:04.717 "num_base_bdevs": 3,
00:09:04.717 "num_base_bdevs_discovered": 2,
00:09:04.717 "num_base_bdevs_operational": 3,
00:09:04.717 "base_bdevs_list": [
00:09:04.717 {
00:09:04.717 "name": "BaseBdev1",
00:09:04.717 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.717 "is_configured": false,
00:09:04.717 "data_offset": 0,
00:09:04.717 "data_size": 0
00:09:04.717 },
00:09:04.717 {
00:09:04.717 "name": "BaseBdev2",
00:09:04.717 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0",
00:09:04.717 "is_configured": true,
00:09:04.717 "data_offset": 2048,
00:09:04.717 "data_size": 63488
00:09:04.717 },
00:09:04.717 {
00:09:04.717 "name": "BaseBdev3",
00:09:04.717 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0",
00:09:04.717 "is_configured": true,
00:09:04.717 "data_offset": 2048,
00:09:04.717 "data_size": 63488
00:09:04.717 }
00:09:04.717 ]
00:09:04.717 }'
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:04.717 02:42:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.286 [2024-12-07 02:42:16.103728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.286 "name": "Existed_Raid",
00:09:05.286 "uuid": "68291e43-7faf-4387-91d4-076783350430",
00:09:05.286 "strip_size_kb": 64,
00:09:05.286 "state": "configuring",
00:09:05.286 "raid_level": "concat",
00:09:05.286 "superblock": true,
00:09:05.286 "num_base_bdevs": 3,
00:09:05.286 "num_base_bdevs_discovered": 1,
00:09:05.286 "num_base_bdevs_operational": 3,
00:09:05.286 "base_bdevs_list": [
00:09:05.286 {
00:09:05.286 "name": "BaseBdev1",
00:09:05.286 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.286 "is_configured": false,
00:09:05.286 "data_offset": 0,
00:09:05.286 "data_size": 0
00:09:05.286 },
00:09:05.286 {
00:09:05.286 "name": null,
00:09:05.286 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0",
00:09:05.286 "is_configured": false,
00:09:05.286 "data_offset": 0,
00:09:05.286 "data_size": 63488
00:09:05.286 },
00:09:05.286 {
00:09:05.286 "name": "BaseBdev3",
00:09:05.286 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0",
00:09:05.286 "is_configured": true,
00:09:05.286 "data_offset": 2048,
00:09:05.286 "data_size": 63488
00:09:05.286 }
00:09:05.286 ]
00:09:05.286 }'
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.286 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.545 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:05.545 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.545 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.545 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.546 [2024-12-07 02:42:16.531855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:05.546 BaseBdev1
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.546 [
00:09:05.546 {
00:09:05.546 "name": "BaseBdev1",
00:09:05.546 "aliases": [
00:09:05.546 "35d0324c-d33d-48e7-9daa-7a9dd1082f61"
00:09:05.546 ],
00:09:05.546 "product_name": "Malloc disk",
00:09:05.546 "block_size": 512,
00:09:05.546 "num_blocks": 65536,
00:09:05.546 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61",
00:09:05.546 "assigned_rate_limits": {
00:09:05.546 "rw_ios_per_sec": 0,
00:09:05.546 "rw_mbytes_per_sec": 0,
00:09:05.546 "r_mbytes_per_sec": 0,
00:09:05.546 "w_mbytes_per_sec": 0
00:09:05.546 },
00:09:05.546 "claimed": true,
00:09:05.546 "claim_type": "exclusive_write",
00:09:05.546 "zoned": false,
00:09:05.546 "supported_io_types": {
00:09:05.546 "read": true,
00:09:05.546 "write": true,
00:09:05.546 "unmap": true,
00:09:05.546 "flush": true,
00:09:05.546 "reset": true,
00:09:05.546 "nvme_admin": false,
00:09:05.546 "nvme_io": false,
00:09:05.546 "nvme_io_md": false,
00:09:05.546 "write_zeroes": true,
00:09:05.546 "zcopy": true,
00:09:05.546 "get_zone_info": false,
00:09:05.546 "zone_management": false,
00:09:05.546 "zone_append": false,
00:09:05.546 "compare": false,
00:09:05.546 "compare_and_write": false,
00:09:05.546 "abort": true,
00:09:05.546 "seek_hole": false,
00:09:05.546 "seek_data": false,
00:09:05.546 "copy": true,
00:09:05.546 "nvme_iov_md": false
00:09:05.546 },
00:09:05.546 "memory_domains": [
00:09:05.546 {
00:09:05.546 "dma_device_id": "system",
00:09:05.546 "dma_device_type": 1
00:09:05.546 },
00:09:05.546 {
00:09:05.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.546 "dma_device_type": 2
00:09:05.546 }
00:09:05.546 ],
00:09:05.546 "driver_specific": {}
00:09:05.546 }
00:09:05.546 ]
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.546 "name": "Existed_Raid", 00:09:05.546 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:05.546 "strip_size_kb": 64, 00:09:05.546 "state": "configuring", 00:09:05.546 "raid_level": "concat", 00:09:05.546 "superblock": true, 00:09:05.546 "num_base_bdevs": 3, 00:09:05.546 "num_base_bdevs_discovered": 2, 00:09:05.546 "num_base_bdevs_operational": 3, 00:09:05.546 "base_bdevs_list": [ 00:09:05.546 { 00:09:05.546 "name": "BaseBdev1", 00:09:05.546 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:05.546 "is_configured": true, 00:09:05.546 "data_offset": 2048, 00:09:05.546 "data_size": 63488 00:09:05.546 }, 00:09:05.546 { 00:09:05.546 "name": null, 00:09:05.546 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0", 00:09:05.546 "is_configured": false, 00:09:05.546 "data_offset": 0, 00:09:05.546 "data_size": 63488 00:09:05.546 }, 00:09:05.546 { 00:09:05.546 "name": "BaseBdev3", 00:09:05.546 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0", 00:09:05.546 "is_configured": true, 00:09:05.546 "data_offset": 2048, 00:09:05.546 "data_size": 63488 00:09:05.546 } 00:09:05.546 ] 00:09:05.546 }' 00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.546 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.117 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.117 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.117 02:42:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.117 02:42:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.117 [2024-12-07 02:42:17.051029] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.117 "name": "Existed_Raid", 00:09:06.117 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:06.117 "strip_size_kb": 64, 00:09:06.117 "state": "configuring", 00:09:06.117 "raid_level": "concat", 00:09:06.117 "superblock": true, 00:09:06.117 "num_base_bdevs": 3, 00:09:06.117 "num_base_bdevs_discovered": 1, 00:09:06.117 "num_base_bdevs_operational": 3, 00:09:06.117 "base_bdevs_list": [ 00:09:06.117 { 00:09:06.117 "name": "BaseBdev1", 00:09:06.117 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:06.117 "is_configured": true, 00:09:06.117 "data_offset": 2048, 00:09:06.117 "data_size": 63488 00:09:06.117 }, 00:09:06.117 { 00:09:06.117 "name": null, 00:09:06.117 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0", 00:09:06.117 "is_configured": false, 00:09:06.117 "data_offset": 0, 00:09:06.117 "data_size": 63488 00:09:06.117 }, 00:09:06.117 { 00:09:06.117 "name": null, 00:09:06.117 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0", 00:09:06.117 "is_configured": false, 00:09:06.117 "data_offset": 0, 00:09:06.117 "data_size": 63488 00:09:06.117 } 00:09:06.117 ] 00:09:06.117 }' 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.117 02:42:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.734 [2024-12-07 02:42:17.542296] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.734 02:42:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.734 "name": "Existed_Raid", 00:09:06.734 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:06.734 "strip_size_kb": 64, 00:09:06.734 "state": "configuring", 00:09:06.734 "raid_level": "concat", 00:09:06.734 "superblock": true, 00:09:06.734 "num_base_bdevs": 3, 00:09:06.734 "num_base_bdevs_discovered": 2, 00:09:06.734 "num_base_bdevs_operational": 3, 00:09:06.734 "base_bdevs_list": [ 00:09:06.734 { 00:09:06.734 "name": "BaseBdev1", 00:09:06.734 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:06.734 "is_configured": true, 00:09:06.734 "data_offset": 2048, 00:09:06.734 "data_size": 63488 00:09:06.734 }, 00:09:06.734 { 00:09:06.734 "name": null, 00:09:06.734 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0", 00:09:06.734 "is_configured": 
false, 00:09:06.734 "data_offset": 0, 00:09:06.734 "data_size": 63488 00:09:06.734 }, 00:09:06.734 { 00:09:06.734 "name": "BaseBdev3", 00:09:06.734 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0", 00:09:06.734 "is_configured": true, 00:09:06.734 "data_offset": 2048, 00:09:06.734 "data_size": 63488 00:09:06.734 } 00:09:06.734 ] 00:09:06.734 }' 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.734 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.994 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:06.994 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.994 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.994 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.994 02:42:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.994 02:42:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.994 [2024-12-07 02:42:18.005484] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:06.994 02:42:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.994 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.253 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.253 "name": "Existed_Raid", 00:09:07.253 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:07.253 "strip_size_kb": 64, 00:09:07.253 "state": "configuring", 00:09:07.253 "raid_level": "concat", 00:09:07.253 "superblock": true, 00:09:07.253 "num_base_bdevs": 3, 00:09:07.253 
"num_base_bdevs_discovered": 1, 00:09:07.253 "num_base_bdevs_operational": 3, 00:09:07.253 "base_bdevs_list": [ 00:09:07.253 { 00:09:07.253 "name": null, 00:09:07.253 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:07.253 "is_configured": false, 00:09:07.253 "data_offset": 0, 00:09:07.253 "data_size": 63488 00:09:07.253 }, 00:09:07.253 { 00:09:07.253 "name": null, 00:09:07.253 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0", 00:09:07.253 "is_configured": false, 00:09:07.253 "data_offset": 0, 00:09:07.253 "data_size": 63488 00:09:07.253 }, 00:09:07.253 { 00:09:07.253 "name": "BaseBdev3", 00:09:07.253 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0", 00:09:07.253 "is_configured": true, 00:09:07.253 "data_offset": 2048, 00:09:07.253 "data_size": 63488 00:09:07.253 } 00:09:07.253 ] 00:09:07.253 }' 00:09:07.254 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.254 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.514 02:42:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.514 [2024-12-07 02:42:18.488560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.514 
02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.514 "name": "Existed_Raid", 00:09:07.514 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:07.514 "strip_size_kb": 64, 00:09:07.514 "state": "configuring", 00:09:07.514 "raid_level": "concat", 00:09:07.514 "superblock": true, 00:09:07.514 "num_base_bdevs": 3, 00:09:07.514 "num_base_bdevs_discovered": 2, 00:09:07.514 "num_base_bdevs_operational": 3, 00:09:07.514 "base_bdevs_list": [ 00:09:07.514 { 00:09:07.514 "name": null, 00:09:07.514 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:07.514 "is_configured": false, 00:09:07.514 "data_offset": 0, 00:09:07.514 "data_size": 63488 00:09:07.514 }, 00:09:07.514 { 00:09:07.514 "name": "BaseBdev2", 00:09:07.514 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0", 00:09:07.514 "is_configured": true, 00:09:07.514 "data_offset": 2048, 00:09:07.514 "data_size": 63488 00:09:07.514 }, 00:09:07.514 { 00:09:07.514 "name": "BaseBdev3", 00:09:07.514 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0", 00:09:07.514 "is_configured": true, 00:09:07.514 "data_offset": 2048, 00:09:07.514 "data_size": 63488 00:09:07.514 } 00:09:07.514 ] 00:09:07.514 }' 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.514 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.084 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.084 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.084 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.084 02:42:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
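The remove/add cycle above is validated by probing individual slots with filters like `jq '.[0].base_bdevs_list[1].is_configured'`: removing a base bdev clears its `name` and flips `is_configured` to false while the slot (and UUID) stays in the list, and `num_base_bdevs_discovered` tracks the count of configured slots. A small sketch of that bookkeeping, with the list copied from the log after `BaseBdev2` is re-added (names and UUIDs are taken verbatim from the dump above):

```python
# base_bdevs_list as shown in the log: BaseBdev1 was deleted (slot kept,
# name cleared), BaseBdev2 and BaseBdev3 are configured.
base_bdevs_list = [
    {"name": None, "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61",
     "is_configured": False, "data_offset": 0, "data_size": 63488},
    {"name": "BaseBdev2", "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0",
     "is_configured": True, "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev3", "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0",
     "is_configured": True, "data_offset": 2048, "data_size": 63488},
]

# Equivalent of jq '.[0].base_bdevs_list[1].is_configured'
assert base_bdevs_list[1]["is_configured"] is True

# num_base_bdevs_discovered is the number of configured slots.
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
print(discovered)  # 2
```

This is why the later step can recreate the missing member as `NewBaseBdev` with the original UUID `35d0324c-…` and have the raid claim it into the vacant slot, bringing the array to `num_base_bdevs_discovered: 3` and state `online`.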
00:09:08.084 02:42:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 35d0324c-d33d-48e7-9daa-7a9dd1082f61 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.084 [2024-12-07 02:42:19.072292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:08.084 [2024-12-07 02:42:19.072560] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:08.084 [2024-12-07 02:42:19.072638] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.084 NewBaseBdev 00:09:08.084 [2024-12-07 02:42:19.072943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:08.084 [2024-12-07 02:42:19.073067] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:08.084 [2024-12-07 02:42:19.073083] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006d00 00:09:08.084 [2024-12-07 02:42:19.073195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.084 [ 00:09:08.084 { 00:09:08.084 "name": "NewBaseBdev", 00:09:08.084 "aliases": [ 00:09:08.084 "35d0324c-d33d-48e7-9daa-7a9dd1082f61" 00:09:08.084 ], 00:09:08.084 "product_name": "Malloc disk", 00:09:08.084 "block_size": 512, 
00:09:08.084 "num_blocks": 65536, 00:09:08.084 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:08.084 "assigned_rate_limits": { 00:09:08.084 "rw_ios_per_sec": 0, 00:09:08.084 "rw_mbytes_per_sec": 0, 00:09:08.084 "r_mbytes_per_sec": 0, 00:09:08.084 "w_mbytes_per_sec": 0 00:09:08.084 }, 00:09:08.084 "claimed": true, 00:09:08.084 "claim_type": "exclusive_write", 00:09:08.084 "zoned": false, 00:09:08.084 "supported_io_types": { 00:09:08.084 "read": true, 00:09:08.084 "write": true, 00:09:08.084 "unmap": true, 00:09:08.084 "flush": true, 00:09:08.084 "reset": true, 00:09:08.084 "nvme_admin": false, 00:09:08.084 "nvme_io": false, 00:09:08.084 "nvme_io_md": false, 00:09:08.084 "write_zeroes": true, 00:09:08.084 "zcopy": true, 00:09:08.084 "get_zone_info": false, 00:09:08.084 "zone_management": false, 00:09:08.084 "zone_append": false, 00:09:08.084 "compare": false, 00:09:08.084 "compare_and_write": false, 00:09:08.084 "abort": true, 00:09:08.084 "seek_hole": false, 00:09:08.084 "seek_data": false, 00:09:08.084 "copy": true, 00:09:08.084 "nvme_iov_md": false 00:09:08.084 }, 00:09:08.084 "memory_domains": [ 00:09:08.084 { 00:09:08.084 "dma_device_id": "system", 00:09:08.084 "dma_device_type": 1 00:09:08.084 }, 00:09:08.084 { 00:09:08.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.084 "dma_device_type": 2 00:09:08.084 } 00:09:08.084 ], 00:09:08.084 "driver_specific": {} 00:09:08.084 } 00:09:08.084 ] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.084 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.085 "name": "Existed_Raid", 00:09:08.085 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:08.085 "strip_size_kb": 64, 00:09:08.085 "state": "online", 00:09:08.085 "raid_level": "concat", 00:09:08.085 "superblock": true, 00:09:08.085 "num_base_bdevs": 3, 00:09:08.085 "num_base_bdevs_discovered": 3, 00:09:08.085 "num_base_bdevs_operational": 3, 00:09:08.085 "base_bdevs_list": [ 00:09:08.085 { 00:09:08.085 "name": "NewBaseBdev", 00:09:08.085 "uuid": 
"35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:08.085 "is_configured": true, 00:09:08.085 "data_offset": 2048, 00:09:08.085 "data_size": 63488 00:09:08.085 }, 00:09:08.085 { 00:09:08.085 "name": "BaseBdev2", 00:09:08.085 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0", 00:09:08.085 "is_configured": true, 00:09:08.085 "data_offset": 2048, 00:09:08.085 "data_size": 63488 00:09:08.085 }, 00:09:08.085 { 00:09:08.085 "name": "BaseBdev3", 00:09:08.085 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0", 00:09:08.085 "is_configured": true, 00:09:08.085 "data_offset": 2048, 00:09:08.085 "data_size": 63488 00:09:08.085 } 00:09:08.085 ] 00:09:08.085 }' 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.085 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.655 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:08.655 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:08.655 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.655 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:08.655 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.655 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:08.656 [2024-12-07 02:42:19.511888] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.656 "name": "Existed_Raid", 00:09:08.656 "aliases": [ 00:09:08.656 "68291e43-7faf-4387-91d4-076783350430" 00:09:08.656 ], 00:09:08.656 "product_name": "Raid Volume", 00:09:08.656 "block_size": 512, 00:09:08.656 "num_blocks": 190464, 00:09:08.656 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:08.656 "assigned_rate_limits": { 00:09:08.656 "rw_ios_per_sec": 0, 00:09:08.656 "rw_mbytes_per_sec": 0, 00:09:08.656 "r_mbytes_per_sec": 0, 00:09:08.656 "w_mbytes_per_sec": 0 00:09:08.656 }, 00:09:08.656 "claimed": false, 00:09:08.656 "zoned": false, 00:09:08.656 "supported_io_types": { 00:09:08.656 "read": true, 00:09:08.656 "write": true, 00:09:08.656 "unmap": true, 00:09:08.656 "flush": true, 00:09:08.656 "reset": true, 00:09:08.656 "nvme_admin": false, 00:09:08.656 "nvme_io": false, 00:09:08.656 "nvme_io_md": false, 00:09:08.656 "write_zeroes": true, 00:09:08.656 "zcopy": false, 00:09:08.656 "get_zone_info": false, 00:09:08.656 "zone_management": false, 00:09:08.656 "zone_append": false, 00:09:08.656 "compare": false, 00:09:08.656 "compare_and_write": false, 00:09:08.656 "abort": false, 00:09:08.656 "seek_hole": false, 00:09:08.656 "seek_data": false, 00:09:08.656 "copy": false, 00:09:08.656 "nvme_iov_md": false 00:09:08.656 }, 00:09:08.656 "memory_domains": [ 00:09:08.656 { 00:09:08.656 "dma_device_id": "system", 00:09:08.656 "dma_device_type": 1 00:09:08.656 }, 00:09:08.656 { 00:09:08.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.656 "dma_device_type": 2 00:09:08.656 }, 00:09:08.656 { 00:09:08.656 "dma_device_id": "system", 00:09:08.656 "dma_device_type": 1 00:09:08.656 }, 00:09:08.656 { 00:09:08.656 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.656 "dma_device_type": 2 00:09:08.656 }, 00:09:08.656 { 00:09:08.656 "dma_device_id": "system", 00:09:08.656 "dma_device_type": 1 00:09:08.656 }, 00:09:08.656 { 00:09:08.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.656 "dma_device_type": 2 00:09:08.656 } 00:09:08.656 ], 00:09:08.656 "driver_specific": { 00:09:08.656 "raid": { 00:09:08.656 "uuid": "68291e43-7faf-4387-91d4-076783350430", 00:09:08.656 "strip_size_kb": 64, 00:09:08.656 "state": "online", 00:09:08.656 "raid_level": "concat", 00:09:08.656 "superblock": true, 00:09:08.656 "num_base_bdevs": 3, 00:09:08.656 "num_base_bdevs_discovered": 3, 00:09:08.656 "num_base_bdevs_operational": 3, 00:09:08.656 "base_bdevs_list": [ 00:09:08.656 { 00:09:08.656 "name": "NewBaseBdev", 00:09:08.656 "uuid": "35d0324c-d33d-48e7-9daa-7a9dd1082f61", 00:09:08.656 "is_configured": true, 00:09:08.656 "data_offset": 2048, 00:09:08.656 "data_size": 63488 00:09:08.656 }, 00:09:08.656 { 00:09:08.656 "name": "BaseBdev2", 00:09:08.656 "uuid": "e6d2fdf4-b8a5-4c37-aafc-c7ca48566ad0", 00:09:08.656 "is_configured": true, 00:09:08.656 "data_offset": 2048, 00:09:08.656 "data_size": 63488 00:09:08.656 }, 00:09:08.656 { 00:09:08.656 "name": "BaseBdev3", 00:09:08.656 "uuid": "de662955-96f9-44fe-9e4f-c7aca1da96d0", 00:09:08.656 "is_configured": true, 00:09:08.656 "data_offset": 2048, 00:09:08.656 "data_size": 63488 00:09:08.656 } 00:09:08.656 ] 00:09:08.656 } 00:09:08.656 } 00:09:08.656 }' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:08.656 BaseBdev2 00:09:08.656 BaseBdev3' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.656 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.917 [2024-12-07 02:42:19.783199] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.917 [2024-12-07 02:42:19.783265] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.917 [2024-12-07 02:42:19.783343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.917 [2024-12-07 02:42:19.783398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.917 [2024-12-07 02:42:19.783410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77559 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77559 ']' 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77559 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77559 00:09:08.917 killing process with pid 77559 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77559' 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77559 00:09:08.917 [2024-12-07 02:42:19.827329] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:08.917 02:42:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77559 00:09:08.917 [2024-12-07 02:42:19.887299] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:09.489 ************************************ 00:09:09.489 END TEST raid_state_function_test_sb 00:09:09.489 02:42:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:09.489 00:09:09.489 real 0m9.038s 00:09:09.489 user 0m15.109s 00:09:09.489 sys 0m1.923s 
00:09:09.489 02:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:09.489 02:42:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.489 ************************************ 00:09:09.489 02:42:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:09.489 02:42:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:09.489 02:42:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:09.489 02:42:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.489 ************************************ 00:09:09.489 START TEST raid_superblock_test 00:09:09.489 ************************************ 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:09.489 02:42:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=78168 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 78168 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 78168 ']' 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:09.489 02:42:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.489 [2024-12-07 02:42:20.434756] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:09.489 [2024-12-07 02:42:20.434901] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78168 ] 00:09:09.750 [2024-12-07 02:42:20.599486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.750 [2024-12-07 02:42:20.672309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.750 [2024-12-07 02:42:20.750417] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.750 [2024-12-07 02:42:20.750458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:10.321 
02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.321 malloc1 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.321 [2024-12-07 02:42:21.293737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:10.321 [2024-12-07 02:42:21.293894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.321 [2024-12-07 02:42:21.293945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:10.321 [2024-12-07 02:42:21.293984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.321 [2024-12-07 02:42:21.296410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.321 [2024-12-07 02:42:21.296486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:10.321 pt1 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.321 malloc2 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.321 [2024-12-07 02:42:21.346680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:10.321 [2024-12-07 02:42:21.346876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.321 [2024-12-07 02:42:21.346921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:10.321 [2024-12-07 02:42:21.346946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.321 [2024-12-07 02:42:21.351734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.321 [2024-12-07 02:42:21.351803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:10.321 
pt2 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.321 malloc3 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.321 [2024-12-07 02:42:21.386957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:10.321 [2024-12-07 02:42:21.387058] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.321 [2024-12-07 02:42:21.387094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:10.321 [2024-12-07 02:42:21.387125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.321 [2024-12-07 02:42:21.389445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.321 [2024-12-07 02:42:21.389513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:10.321 pt3 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.321 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.603 [2024-12-07 02:42:21.398998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:10.603 [2024-12-07 02:42:21.401114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:10.603 [2024-12-07 02:42:21.401216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:10.603 [2024-12-07 02:42:21.401384] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:10.603 [2024-12-07 02:42:21.401449] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.603 [2024-12-07 02:42:21.401759] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 
00:09:10.603 [2024-12-07 02:42:21.401932] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:10.603 [2024-12-07 02:42:21.401978] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:10.603 [2024-12-07 02:42:21.402132] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.603 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.603 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.603 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.603 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.603 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.604 02:42:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.604 "name": "raid_bdev1", 00:09:10.604 "uuid": "3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:10.604 "strip_size_kb": 64, 00:09:10.604 "state": "online", 00:09:10.604 "raid_level": "concat", 00:09:10.604 "superblock": true, 00:09:10.604 "num_base_bdevs": 3, 00:09:10.604 "num_base_bdevs_discovered": 3, 00:09:10.604 "num_base_bdevs_operational": 3, 00:09:10.604 "base_bdevs_list": [ 00:09:10.604 { 00:09:10.604 "name": "pt1", 00:09:10.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.604 "is_configured": true, 00:09:10.604 "data_offset": 2048, 00:09:10.604 "data_size": 63488 00:09:10.604 }, 00:09:10.604 { 00:09:10.604 "name": "pt2", 00:09:10.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.604 "is_configured": true, 00:09:10.604 "data_offset": 2048, 00:09:10.604 "data_size": 63488 00:09:10.604 }, 00:09:10.604 { 00:09:10.604 "name": "pt3", 00:09:10.604 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.604 "is_configured": true, 00:09:10.604 "data_offset": 2048, 00:09:10.604 "data_size": 63488 00:09:10.604 } 00:09:10.604 ] 00:09:10.604 }' 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.604 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.864 [2024-12-07 02:42:21.846489] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.864 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.864 "name": "raid_bdev1", 00:09:10.864 "aliases": [ 00:09:10.864 "3a1ab822-3fd5-4ab9-b271-852760819593" 00:09:10.864 ], 00:09:10.864 "product_name": "Raid Volume", 00:09:10.864 "block_size": 512, 00:09:10.864 "num_blocks": 190464, 00:09:10.864 "uuid": "3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:10.864 "assigned_rate_limits": { 00:09:10.864 "rw_ios_per_sec": 0, 00:09:10.864 "rw_mbytes_per_sec": 0, 00:09:10.864 "r_mbytes_per_sec": 0, 00:09:10.864 "w_mbytes_per_sec": 0 00:09:10.864 }, 00:09:10.864 "claimed": false, 00:09:10.864 "zoned": false, 00:09:10.864 "supported_io_types": { 00:09:10.864 "read": true, 00:09:10.864 "write": true, 00:09:10.864 "unmap": true, 00:09:10.864 "flush": true, 00:09:10.864 "reset": true, 00:09:10.864 "nvme_admin": false, 00:09:10.864 "nvme_io": false, 00:09:10.864 "nvme_io_md": false, 00:09:10.864 "write_zeroes": true, 00:09:10.864 "zcopy": false, 00:09:10.864 "get_zone_info": false, 00:09:10.864 "zone_management": false, 00:09:10.864 "zone_append": false, 00:09:10.864 "compare": 
false, 00:09:10.864 "compare_and_write": false, 00:09:10.864 "abort": false, 00:09:10.864 "seek_hole": false, 00:09:10.864 "seek_data": false, 00:09:10.864 "copy": false, 00:09:10.864 "nvme_iov_md": false 00:09:10.864 }, 00:09:10.864 "memory_domains": [ 00:09:10.864 { 00:09:10.864 "dma_device_id": "system", 00:09:10.864 "dma_device_type": 1 00:09:10.864 }, 00:09:10.864 { 00:09:10.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.864 "dma_device_type": 2 00:09:10.864 }, 00:09:10.864 { 00:09:10.864 "dma_device_id": "system", 00:09:10.864 "dma_device_type": 1 00:09:10.864 }, 00:09:10.864 { 00:09:10.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.864 "dma_device_type": 2 00:09:10.864 }, 00:09:10.864 { 00:09:10.864 "dma_device_id": "system", 00:09:10.865 "dma_device_type": 1 00:09:10.865 }, 00:09:10.865 { 00:09:10.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.865 "dma_device_type": 2 00:09:10.865 } 00:09:10.865 ], 00:09:10.865 "driver_specific": { 00:09:10.865 "raid": { 00:09:10.865 "uuid": "3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:10.865 "strip_size_kb": 64, 00:09:10.865 "state": "online", 00:09:10.865 "raid_level": "concat", 00:09:10.865 "superblock": true, 00:09:10.865 "num_base_bdevs": 3, 00:09:10.865 "num_base_bdevs_discovered": 3, 00:09:10.865 "num_base_bdevs_operational": 3, 00:09:10.865 "base_bdevs_list": [ 00:09:10.865 { 00:09:10.865 "name": "pt1", 00:09:10.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.865 "is_configured": true, 00:09:10.865 "data_offset": 2048, 00:09:10.865 "data_size": 63488 00:09:10.865 }, 00:09:10.865 { 00:09:10.865 "name": "pt2", 00:09:10.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.865 "is_configured": true, 00:09:10.865 "data_offset": 2048, 00:09:10.865 "data_size": 63488 00:09:10.865 }, 00:09:10.865 { 00:09:10.865 "name": "pt3", 00:09:10.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.865 "is_configured": true, 00:09:10.865 "data_offset": 2048, 00:09:10.865 
"data_size": 63488 00:09:10.865 } 00:09:10.865 ] 00:09:10.865 } 00:09:10.865 } 00:09:10.865 }' 00:09:10.865 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.865 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:10.865 pt2 00:09:10.865 pt3' 00:09:10.865 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.124 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.124 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.124 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:11.124 02:42:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.124 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.124 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.124 02:42:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.124 [2024-12-07 02:42:22.125957] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.124 02:42:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a1ab822-3fd5-4ab9-b271-852760819593 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a1ab822-3fd5-4ab9-b271-852760819593 ']' 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.125 [2024-12-07 02:42:22.169638] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.125 [2024-12-07 02:42:22.169671] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.125 [2024-12-07 02:42:22.169758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.125 [2024-12-07 02:42:22.169821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.125 [2024-12-07 02:42:22.169837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.125 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.385 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.385 [2024-12-07 02:42:22.309407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:11.385 [2024-12-07 02:42:22.311494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:11.385 
[2024-12-07 02:42:22.311542] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:11.385 [2024-12-07 02:42:22.311670] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:11.385 [2024-12-07 02:42:22.311738] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:11.385 [2024-12-07 02:42:22.311790] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:11.385 [2024-12-07 02:42:22.311846] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.385 [2024-12-07 02:42:22.311876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:11.385 request: 00:09:11.385 { 00:09:11.385 "name": "raid_bdev1", 00:09:11.385 "raid_level": "concat", 00:09:11.385 "base_bdevs": [ 00:09:11.385 "malloc1", 00:09:11.385 "malloc2", 00:09:11.385 "malloc3" 00:09:11.385 ], 00:09:11.385 "strip_size_kb": 64, 00:09:11.385 "superblock": false, 00:09:11.386 "method": "bdev_raid_create", 00:09:11.386 "req_id": 1 00:09:11.386 } 00:09:11.386 Got JSON-RPC error response 00:09:11.386 response: 00:09:11.386 { 00:09:11.386 "code": -17, 00:09:11.386 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:11.386 } 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.386 02:42:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.386 [2024-12-07 02:42:22.377250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:11.386 [2024-12-07 02:42:22.377297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.386 [2024-12-07 02:42:22.377313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:11.386 [2024-12-07 02:42:22.377324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.386 [2024-12-07 02:42:22.379706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.386 [2024-12-07 02:42:22.379751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:11.386 [2024-12-07 02:42:22.379814] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:11.386 [2024-12-07 02:42:22.379849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:11.386 pt1 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.386 "name": "raid_bdev1", 00:09:11.386 "uuid": 
"3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:11.386 "strip_size_kb": 64, 00:09:11.386 "state": "configuring", 00:09:11.386 "raid_level": "concat", 00:09:11.386 "superblock": true, 00:09:11.386 "num_base_bdevs": 3, 00:09:11.386 "num_base_bdevs_discovered": 1, 00:09:11.386 "num_base_bdevs_operational": 3, 00:09:11.386 "base_bdevs_list": [ 00:09:11.386 { 00:09:11.386 "name": "pt1", 00:09:11.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.386 "is_configured": true, 00:09:11.386 "data_offset": 2048, 00:09:11.386 "data_size": 63488 00:09:11.386 }, 00:09:11.386 { 00:09:11.386 "name": null, 00:09:11.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.386 "is_configured": false, 00:09:11.386 "data_offset": 2048, 00:09:11.386 "data_size": 63488 00:09:11.386 }, 00:09:11.386 { 00:09:11.386 "name": null, 00:09:11.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:11.386 "is_configured": false, 00:09:11.386 "data_offset": 2048, 00:09:11.386 "data_size": 63488 00:09:11.386 } 00:09:11.386 ] 00:09:11.386 }' 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.386 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.956 [2024-12-07 02:42:22.828532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:11.956 [2024-12-07 02:42:22.828653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:11.956 [2024-12-07 02:42:22.828694] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:11.956 [2024-12-07 02:42:22.828744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:11.956 [2024-12-07 02:42:22.829165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:11.956 [2024-12-07 02:42:22.829223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:11.956 [2024-12-07 02:42:22.829316] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:11.956 [2024-12-07 02:42:22.829368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:11.956 pt2 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.956 [2024-12-07 02:42:22.840522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.956 "name": "raid_bdev1", 00:09:11.956 "uuid": "3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:11.956 "strip_size_kb": 64, 00:09:11.956 "state": "configuring", 00:09:11.956 "raid_level": "concat", 00:09:11.956 "superblock": true, 00:09:11.956 "num_base_bdevs": 3, 00:09:11.956 "num_base_bdevs_discovered": 1, 00:09:11.956 "num_base_bdevs_operational": 3, 00:09:11.956 "base_bdevs_list": [ 00:09:11.956 { 00:09:11.956 "name": "pt1", 00:09:11.956 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:11.956 "is_configured": true, 00:09:11.956 "data_offset": 2048, 00:09:11.956 "data_size": 63488 00:09:11.956 }, 00:09:11.956 { 00:09:11.956 "name": null, 00:09:11.956 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:11.956 "is_configured": false, 00:09:11.956 "data_offset": 0, 00:09:11.956 "data_size": 63488 00:09:11.956 }, 00:09:11.956 { 00:09:11.956 "name": null, 00:09:11.956 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:11.956 "is_configured": false, 00:09:11.956 "data_offset": 2048, 00:09:11.956 "data_size": 63488 00:09:11.956 } 00:09:11.956 ] 00:09:11.956 }' 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.956 02:42:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.217 [2024-12-07 02:42:23.263789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.217 [2024-12-07 02:42:23.263897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.217 [2024-12-07 02:42:23.263922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:12.217 [2024-12-07 02:42:23.263932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.217 [2024-12-07 02:42:23.264358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.217 [2024-12-07 02:42:23.264382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.217 [2024-12-07 02:42:23.264463] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:12.217 [2024-12-07 02:42:23.264487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.217 pt2 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.217 [2024-12-07 02:42:23.275737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.217 [2024-12-07 02:42:23.275779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.217 [2024-12-07 02:42:23.275800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:12.217 [2024-12-07 02:42:23.275808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.217 [2024-12-07 02:42:23.276171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.217 [2024-12-07 02:42:23.276199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.217 [2024-12-07 02:42:23.276260] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:12.217 [2024-12-07 02:42:23.276278] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.217 [2024-12-07 02:42:23.276375] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:12.217 [2024-12-07 02:42:23.276388] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.217 [2024-12-07 02:42:23.276639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:12.217 [2024-12-07 
02:42:23.276750] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:12.217 [2024-12-07 02:42:23.276762] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:09:12.217 [2024-12-07 02:42:23.276868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.217 pt3 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.217 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.478 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.478 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.478 "name": "raid_bdev1", 00:09:12.478 "uuid": "3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:12.478 "strip_size_kb": 64, 00:09:12.478 "state": "online", 00:09:12.478 "raid_level": "concat", 00:09:12.478 "superblock": true, 00:09:12.478 "num_base_bdevs": 3, 00:09:12.478 "num_base_bdevs_discovered": 3, 00:09:12.478 "num_base_bdevs_operational": 3, 00:09:12.478 "base_bdevs_list": [ 00:09:12.478 { 00:09:12.478 "name": "pt1", 00:09:12.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.478 "is_configured": true, 00:09:12.478 "data_offset": 2048, 00:09:12.478 "data_size": 63488 00:09:12.478 }, 00:09:12.478 { 00:09:12.478 "name": "pt2", 00:09:12.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.478 "is_configured": true, 00:09:12.478 "data_offset": 2048, 00:09:12.478 "data_size": 63488 00:09:12.478 }, 00:09:12.478 { 00:09:12.478 "name": "pt3", 00:09:12.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.478 "is_configured": true, 00:09:12.478 "data_offset": 2048, 00:09:12.478 "data_size": 63488 00:09:12.478 } 00:09:12.478 ] 00:09:12.478 }' 00:09:12.478 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.478 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:12.738 
02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.738 [2024-12-07 02:42:23.727345] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.738 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:12.738 "name": "raid_bdev1", 00:09:12.738 "aliases": [ 00:09:12.738 "3a1ab822-3fd5-4ab9-b271-852760819593" 00:09:12.738 ], 00:09:12.738 "product_name": "Raid Volume", 00:09:12.738 "block_size": 512, 00:09:12.738 "num_blocks": 190464, 00:09:12.738 "uuid": "3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:12.738 "assigned_rate_limits": { 00:09:12.738 "rw_ios_per_sec": 0, 00:09:12.738 "rw_mbytes_per_sec": 0, 00:09:12.738 "r_mbytes_per_sec": 0, 00:09:12.738 "w_mbytes_per_sec": 0 00:09:12.738 }, 00:09:12.738 "claimed": false, 00:09:12.738 "zoned": false, 00:09:12.738 "supported_io_types": { 00:09:12.738 "read": true, 00:09:12.738 "write": true, 00:09:12.738 "unmap": true, 00:09:12.738 "flush": true, 00:09:12.738 "reset": true, 00:09:12.738 "nvme_admin": false, 00:09:12.738 "nvme_io": false, 00:09:12.738 "nvme_io_md": false, 00:09:12.738 
"write_zeroes": true, 00:09:12.738 "zcopy": false, 00:09:12.738 "get_zone_info": false, 00:09:12.738 "zone_management": false, 00:09:12.738 "zone_append": false, 00:09:12.738 "compare": false, 00:09:12.738 "compare_and_write": false, 00:09:12.738 "abort": false, 00:09:12.738 "seek_hole": false, 00:09:12.738 "seek_data": false, 00:09:12.738 "copy": false, 00:09:12.738 "nvme_iov_md": false 00:09:12.738 }, 00:09:12.738 "memory_domains": [ 00:09:12.738 { 00:09:12.738 "dma_device_id": "system", 00:09:12.738 "dma_device_type": 1 00:09:12.738 }, 00:09:12.738 { 00:09:12.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.738 "dma_device_type": 2 00:09:12.738 }, 00:09:12.738 { 00:09:12.738 "dma_device_id": "system", 00:09:12.738 "dma_device_type": 1 00:09:12.738 }, 00:09:12.738 { 00:09:12.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.738 "dma_device_type": 2 00:09:12.738 }, 00:09:12.738 { 00:09:12.738 "dma_device_id": "system", 00:09:12.739 "dma_device_type": 1 00:09:12.739 }, 00:09:12.739 { 00:09:12.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.739 "dma_device_type": 2 00:09:12.739 } 00:09:12.739 ], 00:09:12.739 "driver_specific": { 00:09:12.739 "raid": { 00:09:12.739 "uuid": "3a1ab822-3fd5-4ab9-b271-852760819593", 00:09:12.739 "strip_size_kb": 64, 00:09:12.739 "state": "online", 00:09:12.739 "raid_level": "concat", 00:09:12.739 "superblock": true, 00:09:12.739 "num_base_bdevs": 3, 00:09:12.739 "num_base_bdevs_discovered": 3, 00:09:12.739 "num_base_bdevs_operational": 3, 00:09:12.739 "base_bdevs_list": [ 00:09:12.739 { 00:09:12.739 "name": "pt1", 00:09:12.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.739 "is_configured": true, 00:09:12.739 "data_offset": 2048, 00:09:12.739 "data_size": 63488 00:09:12.739 }, 00:09:12.739 { 00:09:12.739 "name": "pt2", 00:09:12.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.739 "is_configured": true, 00:09:12.739 "data_offset": 2048, 00:09:12.739 "data_size": 63488 00:09:12.739 }, 00:09:12.739 
{ 00:09:12.739 "name": "pt3", 00:09:12.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.739 "is_configured": true, 00:09:12.739 "data_offset": 2048, 00:09:12.739 "data_size": 63488 00:09:12.739 } 00:09:12.739 ] 00:09:12.739 } 00:09:12.739 } 00:09:12.739 }' 00:09:12.739 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.739 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:12.739 pt2 00:09:12.739 pt3' 00:09:12.739 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:12.999 02:42:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.999 02:42:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:12.999 
[2024-12-07 02:42:24.006852] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a1ab822-3fd5-4ab9-b271-852760819593 '!=' 3a1ab822-3fd5-4ab9-b271-852760819593 ']' 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 78168 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 78168 ']' 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 78168 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.999 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78168 00:09:13.259 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.259 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.259 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78168' 00:09:13.259 killing process with pid 78168 00:09:13.259 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 78168 00:09:13.259 [2024-12-07 02:42:24.096774] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:13.259 02:42:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@974 -- # wait 78168 00:09:13.259 [2024-12-07 02:42:24.096973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.259 [2024-12-07 02:42:24.097050] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.259 [2024-12-07 02:42:24.097125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:09:13.259 [2024-12-07 02:42:24.159919] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.518 02:42:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:13.518 00:09:13.518 real 0m4.189s 00:09:13.518 user 0m6.384s 00:09:13.518 sys 0m0.989s 00:09:13.518 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:13.518 02:42:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.518 ************************************ 00:09:13.518 END TEST raid_superblock_test 00:09:13.518 ************************************ 00:09:13.518 02:42:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:13.518 02:42:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:13.518 02:42:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.518 02:42:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.779 ************************************ 00:09:13.779 START TEST raid_read_error_test 00:09:13.779 ************************************ 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:13.779 02:42:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I7tXnQgZs0 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78410 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78410 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 78410 ']' 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:13.779 02:42:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.779 [2024-12-07 02:42:24.717494] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:13.779 [2024-12-07 02:42:24.717660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78410 ] 00:09:14.039 [2024-12-07 02:42:24.882509] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.039 [2024-12-07 02:42:24.954130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.039 [2024-12-07 02:42:25.029813] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.039 [2024-12-07 02:42:25.029861] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 BaseBdev1_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 true 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 [2024-12-07 02:42:25.571903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:14.608 [2024-12-07 02:42:25.572039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.608 [2024-12-07 02:42:25.572083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.608 [2024-12-07 02:42:25.572112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.608 [2024-12-07 02:42:25.574504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.608 [2024-12-07 02:42:25.574577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:14.608 BaseBdev1 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 BaseBdev2_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 true 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 [2024-12-07 02:42:25.634463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.608 [2024-12-07 02:42:25.634536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.608 [2024-12-07 02:42:25.634566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.608 [2024-12-07 02:42:25.634598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.608 [2024-12-07 02:42:25.637374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.608 [2024-12-07 02:42:25.637412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.608 BaseBdev2 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 BaseBdev3_malloc 00:09:14.608 02:42:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 true 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.608 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.608 [2024-12-07 02:42:25.680832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:14.608 [2024-12-07 02:42:25.680950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.608 [2024-12-07 02:42:25.680977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.608 [2024-12-07 02:42:25.680986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.608 [2024-12-07 02:42:25.683304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.608 [2024-12-07 02:42:25.683341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:14.868 BaseBdev3 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 [2024-12-07 02:42:25.692885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.868 [2024-12-07 02:42:25.694943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.868 [2024-12-07 02:42:25.695020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.868 [2024-12-07 02:42:25.695199] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:14.868 [2024-12-07 02:42:25.695214] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.868 [2024-12-07 02:42:25.695466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:14.868 [2024-12-07 02:42:25.695630] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:14.868 [2024-12-07 02:42:25.695642] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:14.868 [2024-12-07 02:42:25.695783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.868 02:42:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.868 "name": "raid_bdev1", 00:09:14.868 "uuid": "3e86a0a1-6367-40d2-8d65-4a71b3eab1de", 00:09:14.868 "strip_size_kb": 64, 00:09:14.868 "state": "online", 00:09:14.868 "raid_level": "concat", 00:09:14.868 "superblock": true, 00:09:14.868 "num_base_bdevs": 3, 00:09:14.868 "num_base_bdevs_discovered": 3, 00:09:14.868 "num_base_bdevs_operational": 3, 00:09:14.868 "base_bdevs_list": [ 00:09:14.868 { 00:09:14.868 "name": "BaseBdev1", 00:09:14.868 "uuid": "37ec37bf-19a9-560d-979c-a2e96a5c6d9d", 00:09:14.868 "is_configured": true, 00:09:14.868 "data_offset": 2048, 00:09:14.868 "data_size": 63488 00:09:14.868 }, 00:09:14.868 { 00:09:14.868 "name": "BaseBdev2", 00:09:14.868 "uuid": "7d7ee232-b30c-5096-8b0e-f0da655f9bf9", 00:09:14.868 "is_configured": true, 00:09:14.868 "data_offset": 2048, 00:09:14.868 "data_size": 63488 
00:09:14.868 }, 00:09:14.868 { 00:09:14.868 "name": "BaseBdev3", 00:09:14.868 "uuid": "3d0d934d-3a48-5e17-811e-83cbe03d9812", 00:09:14.868 "is_configured": true, 00:09:14.868 "data_offset": 2048, 00:09:14.868 "data_size": 63488 00:09:14.868 } 00:09:14.868 ] 00:09:14.868 }' 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.868 02:42:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.128 02:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:15.128 02:42:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:15.388 [2024-12-07 02:42:26.232399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.329 "name": "raid_bdev1", 00:09:16.329 "uuid": "3e86a0a1-6367-40d2-8d65-4a71b3eab1de", 00:09:16.329 "strip_size_kb": 64, 00:09:16.329 "state": "online", 00:09:16.329 "raid_level": "concat", 00:09:16.329 "superblock": true, 00:09:16.329 "num_base_bdevs": 3, 00:09:16.329 "num_base_bdevs_discovered": 3, 00:09:16.329 "num_base_bdevs_operational": 3, 00:09:16.329 "base_bdevs_list": [ 00:09:16.329 { 00:09:16.329 "name": "BaseBdev1", 00:09:16.329 "uuid": "37ec37bf-19a9-560d-979c-a2e96a5c6d9d", 00:09:16.329 "is_configured": true, 00:09:16.329 "data_offset": 2048, 00:09:16.329 "data_size": 63488 
00:09:16.329 }, 00:09:16.329 { 00:09:16.329 "name": "BaseBdev2", 00:09:16.329 "uuid": "7d7ee232-b30c-5096-8b0e-f0da655f9bf9", 00:09:16.329 "is_configured": true, 00:09:16.329 "data_offset": 2048, 00:09:16.329 "data_size": 63488 00:09:16.329 }, 00:09:16.329 { 00:09:16.329 "name": "BaseBdev3", 00:09:16.329 "uuid": "3d0d934d-3a48-5e17-811e-83cbe03d9812", 00:09:16.329 "is_configured": true, 00:09:16.329 "data_offset": 2048, 00:09:16.329 "data_size": 63488 00:09:16.329 } 00:09:16.329 ] 00:09:16.329 }' 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.329 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.589 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.589 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.589 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.589 [2024-12-07 02:42:27.653233] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.589 [2024-12-07 02:42:27.653350] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.589 [2024-12-07 02:42:27.655880] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.589 [2024-12-07 02:42:27.655981] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.589 [2024-12-07 02:42:27.656038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.589 [2024-12-07 02:42:27.656084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:16.589 { 00:09:16.589 "results": [ 00:09:16.589 { 00:09:16.589 "job": "raid_bdev1", 00:09:16.589 "core_mask": "0x1", 00:09:16.589 "workload": "randrw", 00:09:16.589 "percentage": 50, 
00:09:16.589 "status": "finished", 00:09:16.589 "queue_depth": 1, 00:09:16.589 "io_size": 131072, 00:09:16.589 "runtime": 1.42152, 00:09:16.589 "iops": 15214.699758005516, 00:09:16.589 "mibps": 1901.8374697506895, 00:09:16.589 "io_failed": 1, 00:09:16.589 "io_timeout": 0, 00:09:16.590 "avg_latency_us": 92.2397190735954, 00:09:16.590 "min_latency_us": 24.370305676855896, 00:09:16.590 "max_latency_us": 1366.5257641921398 00:09:16.590 } 00:09:16.590 ], 00:09:16.590 "core_count": 1 00:09:16.590 } 00:09:16.590 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.590 02:42:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78410 00:09:16.590 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 78410 ']' 00:09:16.590 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 78410 00:09:16.590 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:16.850 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.850 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78410 00:09:16.850 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.850 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.850 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78410' 00:09:16.850 killing process with pid 78410 00:09:16.850 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 78410 00:09:16.850 [2024-12-07 02:42:27.705711] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.850 02:42:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 78410 00:09:16.850 [2024-12-07 
02:42:27.753952] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I7tXnQgZs0 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:17.111 ************************************ 00:09:17.111 END TEST raid_read_error_test 00:09:17.111 ************************************ 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:17.111 00:09:17.111 real 0m3.529s 00:09:17.111 user 0m4.300s 00:09:17.111 sys 0m0.676s 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.111 02:42:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 02:42:28 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:17.372 02:42:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:17.372 02:42:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.372 02:42:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 ************************************ 00:09:17.372 START TEST raid_write_error_test 00:09:17.372 ************************************ 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:17.372 02:42:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.372 02:42:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4mBxjDC88o 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78545 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78545 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78545 ']' 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.372 02:42:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.372 [2024-12-07 02:42:28.311870] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:17.372 [2024-12-07 02:42:28.312506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78545 ] 00:09:17.632 [2024-12-07 02:42:28.457593] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.632 [2024-12-07 02:42:28.526201] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.632 [2024-12-07 02:42:28.602772] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.632 [2024-12-07 02:42:28.602809] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.204 BaseBdev1_malloc 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.204 true 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.204 [2024-12-07 02:42:29.173139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.204 [2024-12-07 02:42:29.173208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.204 [2024-12-07 02:42:29.173228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:18.204 [2024-12-07 02:42:29.173237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.204 [2024-12-07 02:42:29.175638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.204 [2024-12-07 02:42:29.175751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.204 BaseBdev1 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.204 BaseBdev2_malloc 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.204 true 00:09:18.204 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.205 [2024-12-07 02:42:29.230618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.205 [2024-12-07 02:42:29.230665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.205 [2024-12-07 02:42:29.230684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.205 [2024-12-07 02:42:29.230692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.205 [2024-12-07 02:42:29.233007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.205 [2024-12-07 02:42:29.233078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.205 BaseBdev2 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.205 02:42:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.205 BaseBdev3_malloc 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.205 true 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.205 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.205 [2024-12-07 02:42:29.277239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:18.205 [2024-12-07 02:42:29.277284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.205 [2024-12-07 02:42:29.277302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:18.205 [2024-12-07 02:42:29.277311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.205 [2024-12-07 02:42:29.279641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.205 [2024-12-07 02:42:29.279673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:18.465 BaseBdev3 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.465 [2024-12-07 02:42:29.289263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.465 [2024-12-07 02:42:29.291259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.465 [2024-12-07 02:42:29.291401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.465 [2024-12-07 02:42:29.291612] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:18.465 [2024-12-07 02:42:29.291636] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.465 [2024-12-07 02:42:29.291891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:18.465 [2024-12-07 02:42:29.292038] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:18.465 [2024-12-07 02:42:29.292047] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:18.465 [2024-12-07 02:42:29.292168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.465 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.466 "name": "raid_bdev1", 00:09:18.466 "uuid": "e3eaa7ae-3488-4441-8bce-5bceda2c7d0c", 00:09:18.466 "strip_size_kb": 64, 00:09:18.466 "state": "online", 00:09:18.466 "raid_level": "concat", 00:09:18.466 "superblock": true, 00:09:18.466 "num_base_bdevs": 3, 00:09:18.466 "num_base_bdevs_discovered": 3, 00:09:18.466 "num_base_bdevs_operational": 3, 00:09:18.466 "base_bdevs_list": [ 00:09:18.466 { 00:09:18.466 
"name": "BaseBdev1", 00:09:18.466 "uuid": "0d579d5d-24e0-5825-89c4-1ef95afb23e8", 00:09:18.466 "is_configured": true, 00:09:18.466 "data_offset": 2048, 00:09:18.466 "data_size": 63488 00:09:18.466 }, 00:09:18.466 { 00:09:18.466 "name": "BaseBdev2", 00:09:18.466 "uuid": "6de14567-b336-5081-9f29-cec2568e1384", 00:09:18.466 "is_configured": true, 00:09:18.466 "data_offset": 2048, 00:09:18.466 "data_size": 63488 00:09:18.466 }, 00:09:18.466 { 00:09:18.466 "name": "BaseBdev3", 00:09:18.466 "uuid": "8020fd42-bb4b-544d-a478-589338b702a4", 00:09:18.466 "is_configured": true, 00:09:18.466 "data_offset": 2048, 00:09:18.466 "data_size": 63488 00:09:18.466 } 00:09:18.466 ] 00:09:18.466 }' 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.466 02:42:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.726 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.726 02:42:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.726 [2024-12-07 02:42:29.788872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.665 02:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.942 02:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.942 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.942 "name": "raid_bdev1", 00:09:19.942 "uuid": "e3eaa7ae-3488-4441-8bce-5bceda2c7d0c", 00:09:19.942 "strip_size_kb": 64, 00:09:19.942 "state": "online", 
00:09:19.942 "raid_level": "concat", 00:09:19.942 "superblock": true, 00:09:19.942 "num_base_bdevs": 3, 00:09:19.942 "num_base_bdevs_discovered": 3, 00:09:19.942 "num_base_bdevs_operational": 3, 00:09:19.942 "base_bdevs_list": [ 00:09:19.942 { 00:09:19.942 "name": "BaseBdev1", 00:09:19.942 "uuid": "0d579d5d-24e0-5825-89c4-1ef95afb23e8", 00:09:19.942 "is_configured": true, 00:09:19.942 "data_offset": 2048, 00:09:19.942 "data_size": 63488 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "name": "BaseBdev2", 00:09:19.942 "uuid": "6de14567-b336-5081-9f29-cec2568e1384", 00:09:19.942 "is_configured": true, 00:09:19.942 "data_offset": 2048, 00:09:19.942 "data_size": 63488 00:09:19.942 }, 00:09:19.942 { 00:09:19.942 "name": "BaseBdev3", 00:09:19.942 "uuid": "8020fd42-bb4b-544d-a478-589338b702a4", 00:09:19.942 "is_configured": true, 00:09:19.942 "data_offset": 2048, 00:09:19.942 "data_size": 63488 00:09:19.942 } 00:09:19.942 ] 00:09:19.942 }' 00:09:19.942 02:42:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.942 02:42:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.222 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.222 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.222 [2024-12-07 02:42:31.173348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.222 [2024-12-07 02:42:31.173395] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.222 [2024-12-07 02:42:31.175921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.222 [2024-12-07 02:42:31.175992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.222 [2024-12-07 02:42:31.176029] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.222 [2024-12-07 02:42:31.176043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:20.222 { 00:09:20.222 "results": [ 00:09:20.222 { 00:09:20.222 "job": "raid_bdev1", 00:09:20.222 "core_mask": "0x1", 00:09:20.222 "workload": "randrw", 00:09:20.222 "percentage": 50, 00:09:20.222 "status": "finished", 00:09:20.222 "queue_depth": 1, 00:09:20.222 "io_size": 131072, 00:09:20.222 "runtime": 1.385101, 00:09:20.222 "iops": 15191.671943056861, 00:09:20.222 "mibps": 1898.9589928821076, 00:09:20.222 "io_failed": 1, 00:09:20.223 "io_timeout": 0, 00:09:20.223 "avg_latency_us": 92.36190748533829, 00:09:20.223 "min_latency_us": 24.146724890829695, 00:09:20.223 "max_latency_us": 1380.8349344978167 00:09:20.223 } 00:09:20.223 ], 00:09:20.223 "core_count": 1 00:09:20.223 } 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78545 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78545 ']' 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78545 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78545 00:09:20.223 killing process with pid 78545 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.223 
02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78545' 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78545 00:09:20.223 [2024-12-07 02:42:31.219681] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.223 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78545 00:09:20.223 [2024-12-07 02:42:31.267434] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4mBxjDC88o 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:20.794 00:09:20.794 real 0m3.431s 00:09:20.794 user 0m4.177s 00:09:20.794 sys 0m0.623s 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.794 ************************************ 00:09:20.794 END TEST raid_write_error_test 00:09:20.794 ************************************ 00:09:20.794 02:42:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.794 02:42:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:20.794 02:42:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:20.794 02:42:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:20.794 02:42:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.794 02:42:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.794 ************************************ 00:09:20.794 START TEST raid_state_function_test 00:09:20.794 ************************************ 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78672 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78672' 00:09:20.794 Process raid pid: 78672 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78672 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78672 ']' 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.794 02:42:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.794 [2024-12-07 02:42:31.811092] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:20.794 [2024-12-07 02:42:31.811742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.055 [2024-12-07 02:42:31.977303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.055 [2024-12-07 02:42:32.046685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.055 [2024-12-07 02:42:32.123305] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.055 [2024-12-07 02:42:32.123431] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.625 [2024-12-07 02:42:32.634846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.625 [2024-12-07 02:42:32.635146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.625 [2024-12-07 02:42:32.635167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.625 [2024-12-07 02:42:32.635218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.625 [2024-12-07 02:42:32.635226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.625 [2024-12-07 02:42:32.635273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.625 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.626 
02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.626 "name": "Existed_Raid", 00:09:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.626 "strip_size_kb": 0, 00:09:21.626 "state": "configuring", 00:09:21.626 "raid_level": "raid1", 00:09:21.626 "superblock": false, 00:09:21.626 "num_base_bdevs": 3, 00:09:21.626 "num_base_bdevs_discovered": 0, 00:09:21.626 "num_base_bdevs_operational": 3, 00:09:21.626 "base_bdevs_list": [ 00:09:21.626 { 00:09:21.626 "name": "BaseBdev1", 00:09:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.626 "is_configured": false, 00:09:21.626 "data_offset": 0, 00:09:21.626 "data_size": 0 00:09:21.626 }, 00:09:21.626 { 00:09:21.626 "name": "BaseBdev2", 00:09:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.626 "is_configured": false, 00:09:21.626 "data_offset": 0, 00:09:21.626 "data_size": 0 00:09:21.626 }, 00:09:21.626 { 00:09:21.626 "name": "BaseBdev3", 00:09:21.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.626 "is_configured": false, 00:09:21.626 "data_offset": 0, 00:09:21.626 "data_size": 0 00:09:21.626 } 00:09:21.626 ] 00:09:21.626 }' 00:09:21.626 02:42:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.626 02:42:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 [2024-12-07 02:42:33.078009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.196 [2024-12-07 02:42:33.078051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 [2024-12-07 02:42:33.090021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.196 [2024-12-07 02:42:33.090393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.196 [2024-12-07 02:42:33.090410] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.196 [2024-12-07 02:42:33.090468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.196 [2024-12-07 02:42:33.090476] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.196 [2024-12-07 02:42:33.090555] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 [2024-12-07 02:42:33.117045] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.196 BaseBdev1 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.196 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.197 [ 00:09:22.197 { 00:09:22.197 "name": "BaseBdev1", 00:09:22.197 "aliases": [ 00:09:22.197 "03ef0027-3ed4-4e93-9412-b3323b433ac9" 00:09:22.197 ], 00:09:22.197 "product_name": "Malloc disk", 00:09:22.197 "block_size": 512, 00:09:22.197 "num_blocks": 65536, 00:09:22.197 "uuid": "03ef0027-3ed4-4e93-9412-b3323b433ac9", 00:09:22.197 "assigned_rate_limits": { 00:09:22.197 "rw_ios_per_sec": 0, 00:09:22.197 "rw_mbytes_per_sec": 0, 00:09:22.197 "r_mbytes_per_sec": 0, 00:09:22.197 "w_mbytes_per_sec": 0 00:09:22.197 }, 00:09:22.197 "claimed": true, 00:09:22.197 "claim_type": "exclusive_write", 00:09:22.197 "zoned": false, 00:09:22.197 "supported_io_types": { 00:09:22.197 "read": true, 00:09:22.197 "write": true, 00:09:22.197 "unmap": true, 00:09:22.197 "flush": true, 00:09:22.197 "reset": true, 00:09:22.197 "nvme_admin": false, 00:09:22.197 "nvme_io": false, 00:09:22.197 "nvme_io_md": false, 00:09:22.197 "write_zeroes": true, 00:09:22.197 "zcopy": true, 00:09:22.197 "get_zone_info": false, 00:09:22.197 "zone_management": false, 00:09:22.197 "zone_append": false, 00:09:22.197 "compare": false, 00:09:22.197 "compare_and_write": false, 00:09:22.197 "abort": true, 00:09:22.197 "seek_hole": false, 00:09:22.197 "seek_data": false, 00:09:22.197 "copy": true, 00:09:22.197 "nvme_iov_md": false 00:09:22.197 }, 00:09:22.197 "memory_domains": [ 00:09:22.197 { 00:09:22.197 "dma_device_id": "system", 00:09:22.197 "dma_device_type": 1 00:09:22.197 }, 00:09:22.197 { 00:09:22.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.197 "dma_device_type": 2 00:09:22.197 } 00:09:22.197 ], 00:09:22.197 "driver_specific": {} 00:09:22.197 } 00:09:22.197 ] 00:09:22.197 02:42:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:22.197 "name": "Existed_Raid", 00:09:22.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.197 "strip_size_kb": 0, 00:09:22.197 "state": "configuring", 00:09:22.197 "raid_level": "raid1", 00:09:22.197 "superblock": false, 00:09:22.197 "num_base_bdevs": 3, 00:09:22.197 "num_base_bdevs_discovered": 1, 00:09:22.197 "num_base_bdevs_operational": 3, 00:09:22.197 "base_bdevs_list": [ 00:09:22.197 { 00:09:22.197 "name": "BaseBdev1", 00:09:22.197 "uuid": "03ef0027-3ed4-4e93-9412-b3323b433ac9", 00:09:22.197 "is_configured": true, 00:09:22.197 "data_offset": 0, 00:09:22.197 "data_size": 65536 00:09:22.197 }, 00:09:22.197 { 00:09:22.197 "name": "BaseBdev2", 00:09:22.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.197 "is_configured": false, 00:09:22.197 "data_offset": 0, 00:09:22.197 "data_size": 0 00:09:22.197 }, 00:09:22.197 { 00:09:22.197 "name": "BaseBdev3", 00:09:22.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.197 "is_configured": false, 00:09:22.197 "data_offset": 0, 00:09:22.197 "data_size": 0 00:09:22.197 } 00:09:22.197 ] 00:09:22.197 }' 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.197 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 [2024-12-07 02:42:33.596231] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.768 [2024-12-07 02:42:33.596291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 [2024-12-07 02:42:33.608245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.768 [2024-12-07 02:42:33.610426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.768 [2024-12-07 02:42:33.610710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.768 [2024-12-07 02:42:33.610724] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.768 [2024-12-07 02:42:33.610738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.768 "name": "Existed_Raid", 00:09:22.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.768 "strip_size_kb": 0, 00:09:22.768 "state": "configuring", 00:09:22.768 "raid_level": "raid1", 00:09:22.768 "superblock": false, 00:09:22.768 "num_base_bdevs": 3, 00:09:22.768 "num_base_bdevs_discovered": 1, 00:09:22.768 "num_base_bdevs_operational": 3, 00:09:22.768 "base_bdevs_list": [ 00:09:22.768 { 00:09:22.768 "name": "BaseBdev1", 00:09:22.768 "uuid": "03ef0027-3ed4-4e93-9412-b3323b433ac9", 00:09:22.768 "is_configured": true, 00:09:22.768 "data_offset": 0, 00:09:22.768 "data_size": 65536 00:09:22.768 }, 00:09:22.768 { 00:09:22.768 "name": "BaseBdev2", 00:09:22.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.768 
"is_configured": false, 00:09:22.768 "data_offset": 0, 00:09:22.768 "data_size": 0 00:09:22.768 }, 00:09:22.768 { 00:09:22.768 "name": "BaseBdev3", 00:09:22.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.768 "is_configured": false, 00:09:22.768 "data_offset": 0, 00:09:22.768 "data_size": 0 00:09:22.768 } 00:09:22.768 ] 00:09:22.768 }' 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.768 02:42:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.027 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.028 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.028 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.287 [2024-12-07 02:42:34.109596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.287 BaseBdev2 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.287 02:42:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.287 [ 00:09:23.287 { 00:09:23.287 "name": "BaseBdev2", 00:09:23.287 "aliases": [ 00:09:23.287 "abc114a4-3daa-4ff7-8338-87a84bf3772d" 00:09:23.287 ], 00:09:23.287 "product_name": "Malloc disk", 00:09:23.287 "block_size": 512, 00:09:23.287 "num_blocks": 65536, 00:09:23.287 "uuid": "abc114a4-3daa-4ff7-8338-87a84bf3772d", 00:09:23.287 "assigned_rate_limits": { 00:09:23.287 "rw_ios_per_sec": 0, 00:09:23.287 "rw_mbytes_per_sec": 0, 00:09:23.287 "r_mbytes_per_sec": 0, 00:09:23.287 "w_mbytes_per_sec": 0 00:09:23.287 }, 00:09:23.287 "claimed": true, 00:09:23.287 "claim_type": "exclusive_write", 00:09:23.287 "zoned": false, 00:09:23.287 "supported_io_types": { 00:09:23.287 "read": true, 00:09:23.287 "write": true, 00:09:23.287 "unmap": true, 00:09:23.287 "flush": true, 00:09:23.287 "reset": true, 00:09:23.287 "nvme_admin": false, 00:09:23.287 "nvme_io": false, 00:09:23.287 "nvme_io_md": false, 00:09:23.287 "write_zeroes": true, 00:09:23.287 "zcopy": true, 00:09:23.287 "get_zone_info": false, 00:09:23.287 "zone_management": false, 00:09:23.287 "zone_append": false, 00:09:23.287 "compare": false, 00:09:23.287 "compare_and_write": false, 00:09:23.287 "abort": true, 00:09:23.287 "seek_hole": false, 00:09:23.287 "seek_data": false, 00:09:23.287 "copy": true, 00:09:23.287 "nvme_iov_md": false 00:09:23.287 }, 00:09:23.287 
"memory_domains": [ 00:09:23.287 { 00:09:23.287 "dma_device_id": "system", 00:09:23.287 "dma_device_type": 1 00:09:23.287 }, 00:09:23.287 { 00:09:23.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.287 "dma_device_type": 2 00:09:23.287 } 00:09:23.287 ], 00:09:23.287 "driver_specific": {} 00:09:23.287 } 00:09:23.287 ] 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.287 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.288 "name": "Existed_Raid", 00:09:23.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.288 "strip_size_kb": 0, 00:09:23.288 "state": "configuring", 00:09:23.288 "raid_level": "raid1", 00:09:23.288 "superblock": false, 00:09:23.288 "num_base_bdevs": 3, 00:09:23.288 "num_base_bdevs_discovered": 2, 00:09:23.288 "num_base_bdevs_operational": 3, 00:09:23.288 "base_bdevs_list": [ 00:09:23.288 { 00:09:23.288 "name": "BaseBdev1", 00:09:23.288 "uuid": "03ef0027-3ed4-4e93-9412-b3323b433ac9", 00:09:23.288 "is_configured": true, 00:09:23.288 "data_offset": 0, 00:09:23.288 "data_size": 65536 00:09:23.288 }, 00:09:23.288 { 00:09:23.288 "name": "BaseBdev2", 00:09:23.288 "uuid": "abc114a4-3daa-4ff7-8338-87a84bf3772d", 00:09:23.288 "is_configured": true, 00:09:23.288 "data_offset": 0, 00:09:23.288 "data_size": 65536 00:09:23.288 }, 00:09:23.288 { 00:09:23.288 "name": "BaseBdev3", 00:09:23.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.288 "is_configured": false, 00:09:23.288 "data_offset": 0, 00:09:23.288 "data_size": 0 00:09:23.288 } 00:09:23.288 ] 00:09:23.288 }' 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.288 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.547 [2024-12-07 02:42:34.609520] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.547 [2024-12-07 02:42:34.609574] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:23.547 [2024-12-07 02:42:34.609596] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:23.547 [2024-12-07 02:42:34.609934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:23.547 [2024-12-07 02:42:34.610109] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:23.547 [2024-12-07 02:42:34.610125] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:23.547 [2024-12-07 02:42:34.610349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.547 BaseBdev3 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.547 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.807 [ 00:09:23.807 { 00:09:23.807 "name": "BaseBdev3", 00:09:23.807 "aliases": [ 00:09:23.807 "28ed66bc-8dc6-4123-849f-b2a31dc233e7" 00:09:23.807 ], 00:09:23.807 "product_name": "Malloc disk", 00:09:23.807 "block_size": 512, 00:09:23.807 "num_blocks": 65536, 00:09:23.807 "uuid": "28ed66bc-8dc6-4123-849f-b2a31dc233e7", 00:09:23.807 "assigned_rate_limits": { 00:09:23.807 "rw_ios_per_sec": 0, 00:09:23.807 "rw_mbytes_per_sec": 0, 00:09:23.807 "r_mbytes_per_sec": 0, 00:09:23.807 "w_mbytes_per_sec": 0 00:09:23.807 }, 00:09:23.807 "claimed": true, 00:09:23.807 "claim_type": "exclusive_write", 00:09:23.807 "zoned": false, 00:09:23.807 "supported_io_types": { 00:09:23.807 "read": true, 00:09:23.807 "write": true, 00:09:23.807 "unmap": true, 00:09:23.807 "flush": true, 00:09:23.807 "reset": true, 00:09:23.807 "nvme_admin": false, 00:09:23.807 "nvme_io": false, 00:09:23.807 "nvme_io_md": false, 00:09:23.807 "write_zeroes": true, 00:09:23.807 "zcopy": true, 00:09:23.807 "get_zone_info": false, 00:09:23.807 "zone_management": false, 00:09:23.807 "zone_append": false, 00:09:23.807 "compare": false, 00:09:23.807 "compare_and_write": false, 00:09:23.807 "abort": true, 00:09:23.807 "seek_hole": false, 00:09:23.807 "seek_data": false, 00:09:23.807 
"copy": true, 00:09:23.807 "nvme_iov_md": false 00:09:23.807 }, 00:09:23.807 "memory_domains": [ 00:09:23.807 { 00:09:23.807 "dma_device_id": "system", 00:09:23.807 "dma_device_type": 1 00:09:23.807 }, 00:09:23.807 { 00:09:23.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.807 "dma_device_type": 2 00:09:23.807 } 00:09:23.807 ], 00:09:23.807 "driver_specific": {} 00:09:23.807 } 00:09:23.807 ] 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.807 02:42:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.807 "name": "Existed_Raid", 00:09:23.807 "uuid": "3d153b93-0ba8-4b24-b6cb-666540452342", 00:09:23.807 "strip_size_kb": 0, 00:09:23.807 "state": "online", 00:09:23.807 "raid_level": "raid1", 00:09:23.807 "superblock": false, 00:09:23.807 "num_base_bdevs": 3, 00:09:23.807 "num_base_bdevs_discovered": 3, 00:09:23.807 "num_base_bdevs_operational": 3, 00:09:23.807 "base_bdevs_list": [ 00:09:23.807 { 00:09:23.807 "name": "BaseBdev1", 00:09:23.807 "uuid": "03ef0027-3ed4-4e93-9412-b3323b433ac9", 00:09:23.807 "is_configured": true, 00:09:23.807 "data_offset": 0, 00:09:23.807 "data_size": 65536 00:09:23.807 }, 00:09:23.807 { 00:09:23.807 "name": "BaseBdev2", 00:09:23.807 "uuid": "abc114a4-3daa-4ff7-8338-87a84bf3772d", 00:09:23.807 "is_configured": true, 00:09:23.807 "data_offset": 0, 00:09:23.807 "data_size": 65536 00:09:23.807 }, 00:09:23.807 { 00:09:23.807 "name": "BaseBdev3", 00:09:23.807 "uuid": "28ed66bc-8dc6-4123-849f-b2a31dc233e7", 00:09:23.807 "is_configured": true, 00:09:23.807 "data_offset": 0, 00:09:23.807 "data_size": 65536 00:09:23.807 } 00:09:23.807 ] 00:09:23.807 }' 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.807 02:42:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.066 02:42:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.066 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.066 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.067 [2024-12-07 02:42:35.092984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.067 "name": "Existed_Raid", 00:09:24.067 "aliases": [ 00:09:24.067 "3d153b93-0ba8-4b24-b6cb-666540452342" 00:09:24.067 ], 00:09:24.067 "product_name": "Raid Volume", 00:09:24.067 "block_size": 512, 00:09:24.067 "num_blocks": 65536, 00:09:24.067 "uuid": "3d153b93-0ba8-4b24-b6cb-666540452342", 00:09:24.067 "assigned_rate_limits": { 00:09:24.067 "rw_ios_per_sec": 0, 00:09:24.067 "rw_mbytes_per_sec": 0, 00:09:24.067 "r_mbytes_per_sec": 0, 00:09:24.067 "w_mbytes_per_sec": 0 00:09:24.067 }, 00:09:24.067 "claimed": false, 00:09:24.067 "zoned": false, 
00:09:24.067 "supported_io_types": { 00:09:24.067 "read": true, 00:09:24.067 "write": true, 00:09:24.067 "unmap": false, 00:09:24.067 "flush": false, 00:09:24.067 "reset": true, 00:09:24.067 "nvme_admin": false, 00:09:24.067 "nvme_io": false, 00:09:24.067 "nvme_io_md": false, 00:09:24.067 "write_zeroes": true, 00:09:24.067 "zcopy": false, 00:09:24.067 "get_zone_info": false, 00:09:24.067 "zone_management": false, 00:09:24.067 "zone_append": false, 00:09:24.067 "compare": false, 00:09:24.067 "compare_and_write": false, 00:09:24.067 "abort": false, 00:09:24.067 "seek_hole": false, 00:09:24.067 "seek_data": false, 00:09:24.067 "copy": false, 00:09:24.067 "nvme_iov_md": false 00:09:24.067 }, 00:09:24.067 "memory_domains": [ 00:09:24.067 { 00:09:24.067 "dma_device_id": "system", 00:09:24.067 "dma_device_type": 1 00:09:24.067 }, 00:09:24.067 { 00:09:24.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.067 "dma_device_type": 2 00:09:24.067 }, 00:09:24.067 { 00:09:24.067 "dma_device_id": "system", 00:09:24.067 "dma_device_type": 1 00:09:24.067 }, 00:09:24.067 { 00:09:24.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.067 "dma_device_type": 2 00:09:24.067 }, 00:09:24.067 { 00:09:24.067 "dma_device_id": "system", 00:09:24.067 "dma_device_type": 1 00:09:24.067 }, 00:09:24.067 { 00:09:24.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.067 "dma_device_type": 2 00:09:24.067 } 00:09:24.067 ], 00:09:24.067 "driver_specific": { 00:09:24.067 "raid": { 00:09:24.067 "uuid": "3d153b93-0ba8-4b24-b6cb-666540452342", 00:09:24.067 "strip_size_kb": 0, 00:09:24.067 "state": "online", 00:09:24.067 "raid_level": "raid1", 00:09:24.067 "superblock": false, 00:09:24.067 "num_base_bdevs": 3, 00:09:24.067 "num_base_bdevs_discovered": 3, 00:09:24.067 "num_base_bdevs_operational": 3, 00:09:24.067 "base_bdevs_list": [ 00:09:24.067 { 00:09:24.067 "name": "BaseBdev1", 00:09:24.067 "uuid": "03ef0027-3ed4-4e93-9412-b3323b433ac9", 00:09:24.067 "is_configured": true, 00:09:24.067 
"data_offset": 0, 00:09:24.067 "data_size": 65536 00:09:24.067 }, 00:09:24.067 { 00:09:24.067 "name": "BaseBdev2", 00:09:24.067 "uuid": "abc114a4-3daa-4ff7-8338-87a84bf3772d", 00:09:24.067 "is_configured": true, 00:09:24.067 "data_offset": 0, 00:09:24.067 "data_size": 65536 00:09:24.067 }, 00:09:24.067 { 00:09:24.067 "name": "BaseBdev3", 00:09:24.067 "uuid": "28ed66bc-8dc6-4123-849f-b2a31dc233e7", 00:09:24.067 "is_configured": true, 00:09:24.067 "data_offset": 0, 00:09:24.067 "data_size": 65536 00:09:24.067 } 00:09:24.067 ] 00:09:24.067 } 00:09:24.067 } 00:09:24.067 }' 00:09:24.067 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.326 BaseBdev2 00:09:24.326 BaseBdev3' 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.326 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.327 [2024-12-07 02:42:35.344349] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.327 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.587 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.587 "name": "Existed_Raid", 00:09:24.587 "uuid": "3d153b93-0ba8-4b24-b6cb-666540452342", 00:09:24.587 "strip_size_kb": 0, 00:09:24.587 "state": "online", 00:09:24.587 "raid_level": "raid1", 00:09:24.587 "superblock": false, 00:09:24.587 "num_base_bdevs": 3, 00:09:24.587 "num_base_bdevs_discovered": 2, 00:09:24.587 "num_base_bdevs_operational": 2, 00:09:24.587 "base_bdevs_list": [ 00:09:24.587 { 00:09:24.587 "name": null, 00:09:24.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.587 "is_configured": false, 00:09:24.587 "data_offset": 0, 00:09:24.587 "data_size": 65536 00:09:24.587 }, 00:09:24.587 { 00:09:24.587 "name": "BaseBdev2", 00:09:24.587 "uuid": "abc114a4-3daa-4ff7-8338-87a84bf3772d", 00:09:24.587 "is_configured": true, 00:09:24.587 "data_offset": 0, 00:09:24.587 "data_size": 65536 00:09:24.587 }, 00:09:24.587 { 00:09:24.587 "name": "BaseBdev3", 00:09:24.587 "uuid": "28ed66bc-8dc6-4123-849f-b2a31dc233e7", 00:09:24.587 "is_configured": true, 00:09:24.587 "data_offset": 0, 00:09:24.587 "data_size": 65536 00:09:24.587 } 00:09:24.587 ] 
00:09:24.587 }' 00:09:24.587 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.587 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.846 [2024-12-07 02:42:35.868053] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.846 02:42:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.846 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.106 [2024-12-07 02:42:35.944730] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.106 [2024-12-07 02:42:35.944838] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.106 [2024-12-07 02:42:35.965832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.106 [2024-12-07 02:42:35.965884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.106 [2024-12-07 02:42:35.965901] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.106 02:42:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.106 02:42:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.106 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.106 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.106 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:25.106 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:25.106 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.106 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 BaseBdev2 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.107 
02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 [ 00:09:25.107 { 00:09:25.107 "name": "BaseBdev2", 00:09:25.107 "aliases": [ 00:09:25.107 "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a" 00:09:25.107 ], 00:09:25.107 "product_name": "Malloc disk", 00:09:25.107 "block_size": 512, 00:09:25.107 "num_blocks": 65536, 00:09:25.107 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:25.107 "assigned_rate_limits": { 00:09:25.107 "rw_ios_per_sec": 0, 00:09:25.107 "rw_mbytes_per_sec": 0, 00:09:25.107 "r_mbytes_per_sec": 0, 00:09:25.107 "w_mbytes_per_sec": 0 00:09:25.107 }, 00:09:25.107 "claimed": false, 00:09:25.107 "zoned": false, 00:09:25.107 "supported_io_types": { 00:09:25.107 "read": true, 00:09:25.107 "write": true, 00:09:25.107 "unmap": true, 00:09:25.107 "flush": true, 00:09:25.107 "reset": true, 00:09:25.107 "nvme_admin": false, 00:09:25.107 "nvme_io": false, 00:09:25.107 "nvme_io_md": false, 00:09:25.107 "write_zeroes": true, 
00:09:25.107 "zcopy": true, 00:09:25.107 "get_zone_info": false, 00:09:25.107 "zone_management": false, 00:09:25.107 "zone_append": false, 00:09:25.107 "compare": false, 00:09:25.107 "compare_and_write": false, 00:09:25.107 "abort": true, 00:09:25.107 "seek_hole": false, 00:09:25.107 "seek_data": false, 00:09:25.107 "copy": true, 00:09:25.107 "nvme_iov_md": false 00:09:25.107 }, 00:09:25.107 "memory_domains": [ 00:09:25.107 { 00:09:25.107 "dma_device_id": "system", 00:09:25.107 "dma_device_type": 1 00:09:25.107 }, 00:09:25.107 { 00:09:25.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.107 "dma_device_type": 2 00:09:25.107 } 00:09:25.107 ], 00:09:25.107 "driver_specific": {} 00:09:25.107 } 00:09:25.107 ] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 BaseBdev3 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.107 02:42:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 [ 00:09:25.107 { 00:09:25.107 "name": "BaseBdev3", 00:09:25.107 "aliases": [ 00:09:25.107 "a968a38e-2323-4e7c-bf50-8c655061ef24" 00:09:25.107 ], 00:09:25.107 "product_name": "Malloc disk", 00:09:25.107 "block_size": 512, 00:09:25.107 "num_blocks": 65536, 00:09:25.107 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:25.107 "assigned_rate_limits": { 00:09:25.107 "rw_ios_per_sec": 0, 00:09:25.107 "rw_mbytes_per_sec": 0, 00:09:25.107 "r_mbytes_per_sec": 0, 00:09:25.107 "w_mbytes_per_sec": 0 00:09:25.107 }, 00:09:25.107 "claimed": false, 00:09:25.107 "zoned": false, 00:09:25.107 "supported_io_types": { 00:09:25.107 "read": true, 00:09:25.107 "write": true, 00:09:25.107 "unmap": true, 00:09:25.107 "flush": true, 00:09:25.107 "reset": true, 00:09:25.107 "nvme_admin": false, 00:09:25.107 "nvme_io": false, 00:09:25.107 "nvme_io_md": false, 00:09:25.107 "write_zeroes": true, 
00:09:25.107 "zcopy": true, 00:09:25.107 "get_zone_info": false, 00:09:25.107 "zone_management": false, 00:09:25.107 "zone_append": false, 00:09:25.107 "compare": false, 00:09:25.107 "compare_and_write": false, 00:09:25.107 "abort": true, 00:09:25.107 "seek_hole": false, 00:09:25.107 "seek_data": false, 00:09:25.107 "copy": true, 00:09:25.107 "nvme_iov_md": false 00:09:25.107 }, 00:09:25.107 "memory_domains": [ 00:09:25.107 { 00:09:25.107 "dma_device_id": "system", 00:09:25.107 "dma_device_type": 1 00:09:25.107 }, 00:09:25.107 { 00:09:25.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.107 "dma_device_type": 2 00:09:25.107 } 00:09:25.107 ], 00:09:25.107 "driver_specific": {} 00:09:25.107 } 00:09:25.107 ] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 [2024-12-07 02:42:36.126116] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.107 [2024-12-07 02:42:36.126590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.107 [2024-12-07 02:42:36.126672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.107 [2024-12-07 02:42:36.128840] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.107 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:25.107 "name": "Existed_Raid", 00:09:25.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.107 "strip_size_kb": 0, 00:09:25.108 "state": "configuring", 00:09:25.108 "raid_level": "raid1", 00:09:25.108 "superblock": false, 00:09:25.108 "num_base_bdevs": 3, 00:09:25.108 "num_base_bdevs_discovered": 2, 00:09:25.108 "num_base_bdevs_operational": 3, 00:09:25.108 "base_bdevs_list": [ 00:09:25.108 { 00:09:25.108 "name": "BaseBdev1", 00:09:25.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.108 "is_configured": false, 00:09:25.108 "data_offset": 0, 00:09:25.108 "data_size": 0 00:09:25.108 }, 00:09:25.108 { 00:09:25.108 "name": "BaseBdev2", 00:09:25.108 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:25.108 "is_configured": true, 00:09:25.108 "data_offset": 0, 00:09:25.108 "data_size": 65536 00:09:25.108 }, 00:09:25.108 { 00:09:25.108 "name": "BaseBdev3", 00:09:25.108 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:25.108 "is_configured": true, 00:09:25.108 "data_offset": 0, 00:09:25.108 "data_size": 65536 00:09:25.108 } 00:09:25.108 ] 00:09:25.108 }' 00:09:25.108 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.108 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.677 [2024-12-07 02:42:36.589300] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.677 "name": "Existed_Raid", 00:09:25.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.677 "strip_size_kb": 0, 00:09:25.677 "state": "configuring", 00:09:25.677 "raid_level": "raid1", 00:09:25.677 "superblock": false, 00:09:25.677 "num_base_bdevs": 3, 
00:09:25.677 "num_base_bdevs_discovered": 1, 00:09:25.677 "num_base_bdevs_operational": 3, 00:09:25.677 "base_bdevs_list": [ 00:09:25.677 { 00:09:25.677 "name": "BaseBdev1", 00:09:25.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.677 "is_configured": false, 00:09:25.677 "data_offset": 0, 00:09:25.677 "data_size": 0 00:09:25.677 }, 00:09:25.677 { 00:09:25.677 "name": null, 00:09:25.677 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:25.677 "is_configured": false, 00:09:25.677 "data_offset": 0, 00:09:25.677 "data_size": 65536 00:09:25.677 }, 00:09:25.677 { 00:09:25.677 "name": "BaseBdev3", 00:09:25.677 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:25.677 "is_configured": true, 00:09:25.677 "data_offset": 0, 00:09:25.677 "data_size": 65536 00:09:25.677 } 00:09:25.677 ] 00:09:25.677 }' 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.677 02:42:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.248 02:42:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.248 [2024-12-07 02:42:37.093315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.248 BaseBdev1 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.248 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.248 [ 00:09:26.248 { 00:09:26.248 "name": "BaseBdev1", 00:09:26.248 "aliases": [ 00:09:26.248 "9152be1c-5019-45cc-8225-5376833f6ffa" 00:09:26.248 ], 00:09:26.248 "product_name": "Malloc disk", 
00:09:26.248 "block_size": 512, 00:09:26.248 "num_blocks": 65536, 00:09:26.248 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:26.248 "assigned_rate_limits": { 00:09:26.248 "rw_ios_per_sec": 0, 00:09:26.248 "rw_mbytes_per_sec": 0, 00:09:26.248 "r_mbytes_per_sec": 0, 00:09:26.248 "w_mbytes_per_sec": 0 00:09:26.248 }, 00:09:26.248 "claimed": true, 00:09:26.248 "claim_type": "exclusive_write", 00:09:26.248 "zoned": false, 00:09:26.248 "supported_io_types": { 00:09:26.248 "read": true, 00:09:26.248 "write": true, 00:09:26.248 "unmap": true, 00:09:26.248 "flush": true, 00:09:26.248 "reset": true, 00:09:26.248 "nvme_admin": false, 00:09:26.248 "nvme_io": false, 00:09:26.248 "nvme_io_md": false, 00:09:26.248 "write_zeroes": true, 00:09:26.248 "zcopy": true, 00:09:26.248 "get_zone_info": false, 00:09:26.248 "zone_management": false, 00:09:26.248 "zone_append": false, 00:09:26.248 "compare": false, 00:09:26.248 "compare_and_write": false, 00:09:26.248 "abort": true, 00:09:26.248 "seek_hole": false, 00:09:26.248 "seek_data": false, 00:09:26.248 "copy": true, 00:09:26.248 "nvme_iov_md": false 00:09:26.248 }, 00:09:26.248 "memory_domains": [ 00:09:26.248 { 00:09:26.248 "dma_device_id": "system", 00:09:26.248 "dma_device_type": 1 00:09:26.248 }, 00:09:26.248 { 00:09:26.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.248 "dma_device_type": 2 00:09:26.248 } 00:09:26.248 ], 00:09:26.249 "driver_specific": {} 00:09:26.249 } 00:09:26.249 ] 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.249 "name": "Existed_Raid", 00:09:26.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.249 "strip_size_kb": 0, 00:09:26.249 "state": "configuring", 00:09:26.249 "raid_level": "raid1", 00:09:26.249 "superblock": false, 00:09:26.249 "num_base_bdevs": 3, 00:09:26.249 "num_base_bdevs_discovered": 2, 00:09:26.249 "num_base_bdevs_operational": 3, 00:09:26.249 "base_bdevs_list": [ 00:09:26.249 { 00:09:26.249 "name": "BaseBdev1", 00:09:26.249 "uuid": 
"9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:26.249 "is_configured": true, 00:09:26.249 "data_offset": 0, 00:09:26.249 "data_size": 65536 00:09:26.249 }, 00:09:26.249 { 00:09:26.249 "name": null, 00:09:26.249 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:26.249 "is_configured": false, 00:09:26.249 "data_offset": 0, 00:09:26.249 "data_size": 65536 00:09:26.249 }, 00:09:26.249 { 00:09:26.249 "name": "BaseBdev3", 00:09:26.249 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:26.249 "is_configured": true, 00:09:26.249 "data_offset": 0, 00:09:26.249 "data_size": 65536 00:09:26.249 } 00:09:26.249 ] 00:09:26.249 }' 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.249 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.509 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.509 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.509 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.509 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.770 [2024-12-07 02:42:37.596498] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.770 02:42:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.770 "name": "Existed_Raid", 00:09:26.770 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:26.770 "strip_size_kb": 0, 00:09:26.770 "state": "configuring", 00:09:26.770 "raid_level": "raid1", 00:09:26.770 "superblock": false, 00:09:26.770 "num_base_bdevs": 3, 00:09:26.770 "num_base_bdevs_discovered": 1, 00:09:26.770 "num_base_bdevs_operational": 3, 00:09:26.770 "base_bdevs_list": [ 00:09:26.770 { 00:09:26.770 "name": "BaseBdev1", 00:09:26.770 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:26.770 "is_configured": true, 00:09:26.770 "data_offset": 0, 00:09:26.770 "data_size": 65536 00:09:26.770 }, 00:09:26.770 { 00:09:26.770 "name": null, 00:09:26.770 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:26.770 "is_configured": false, 00:09:26.770 "data_offset": 0, 00:09:26.770 "data_size": 65536 00:09:26.770 }, 00:09:26.770 { 00:09:26.770 "name": null, 00:09:26.770 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:26.770 "is_configured": false, 00:09:26.770 "data_offset": 0, 00:09:26.770 "data_size": 65536 00:09:26.770 } 00:09:26.770 ] 00:09:26.770 }' 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.770 02:42:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.031 [2024-12-07 02:42:38.059747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.031 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.291 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.291 "name": "Existed_Raid", 00:09:27.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.291 "strip_size_kb": 0, 00:09:27.291 "state": "configuring", 00:09:27.291 "raid_level": "raid1", 00:09:27.291 "superblock": false, 00:09:27.291 "num_base_bdevs": 3, 00:09:27.291 "num_base_bdevs_discovered": 2, 00:09:27.291 "num_base_bdevs_operational": 3, 00:09:27.291 "base_bdevs_list": [ 00:09:27.291 { 00:09:27.291 "name": "BaseBdev1", 00:09:27.291 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:27.291 "is_configured": true, 00:09:27.291 "data_offset": 0, 00:09:27.291 "data_size": 65536 00:09:27.291 }, 00:09:27.291 { 00:09:27.291 "name": null, 00:09:27.291 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:27.291 "is_configured": false, 00:09:27.291 "data_offset": 0, 00:09:27.291 "data_size": 65536 00:09:27.291 }, 00:09:27.291 { 00:09:27.291 "name": "BaseBdev3", 00:09:27.291 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:27.291 "is_configured": true, 00:09:27.291 "data_offset": 0, 00:09:27.291 "data_size": 65536 00:09:27.291 } 00:09:27.291 ] 00:09:27.291 }' 00:09:27.291 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.291 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.552 02:42:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.552 [2024-12-07 02:42:38.550898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.552 "name": "Existed_Raid", 00:09:27.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.552 "strip_size_kb": 0, 00:09:27.552 "state": "configuring", 00:09:27.552 "raid_level": "raid1", 00:09:27.552 "superblock": false, 00:09:27.552 "num_base_bdevs": 3, 00:09:27.552 "num_base_bdevs_discovered": 1, 00:09:27.552 "num_base_bdevs_operational": 3, 00:09:27.552 "base_bdevs_list": [ 00:09:27.552 { 00:09:27.552 "name": null, 00:09:27.552 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:27.552 "is_configured": false, 00:09:27.552 "data_offset": 0, 00:09:27.552 "data_size": 65536 00:09:27.552 }, 00:09:27.552 { 00:09:27.552 "name": null, 00:09:27.552 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:27.552 "is_configured": false, 00:09:27.552 "data_offset": 0, 00:09:27.552 "data_size": 65536 00:09:27.552 }, 00:09:27.552 { 00:09:27.552 "name": "BaseBdev3", 00:09:27.552 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:27.552 "is_configured": true, 00:09:27.552 "data_offset": 0, 00:09:27.552 "data_size": 65536 00:09:27.552 } 00:09:27.552 ] 00:09:27.552 }' 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.552 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:28.122 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.122 02:42:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.122 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.122 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.122 02:42:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.122 [2024-12-07 02:42:39.037969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.122 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.122 "name": "Existed_Raid", 00:09:28.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.122 "strip_size_kb": 0, 00:09:28.122 "state": "configuring", 00:09:28.122 "raid_level": "raid1", 00:09:28.122 "superblock": false, 00:09:28.122 "num_base_bdevs": 3, 00:09:28.122 "num_base_bdevs_discovered": 2, 00:09:28.122 "num_base_bdevs_operational": 3, 00:09:28.122 "base_bdevs_list": [ 00:09:28.122 { 00:09:28.122 "name": null, 00:09:28.122 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:28.122 "is_configured": false, 00:09:28.122 "data_offset": 0, 00:09:28.122 "data_size": 65536 00:09:28.122 }, 00:09:28.122 { 00:09:28.122 "name": "BaseBdev2", 00:09:28.122 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:28.122 "is_configured": true, 00:09:28.122 "data_offset": 0, 00:09:28.122 "data_size": 65536 00:09:28.122 }, 00:09:28.122 { 00:09:28.122 "name": "BaseBdev3", 
00:09:28.123 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:28.123 "is_configured": true, 00:09:28.123 "data_offset": 0, 00:09:28.123 "data_size": 65536 00:09:28.123 } 00:09:28.123 ] 00:09:28.123 }' 00:09:28.123 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.123 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9152be1c-5019-45cc-8225-5376833f6ffa 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.694 NewBaseBdev 00:09:28.694 [2024-12-07 02:42:39.589645] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:28.694 [2024-12-07 02:42:39.589697] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:28.694 [2024-12-07 02:42:39.589705] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:28.694 [2024-12-07 02:42:39.589986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:28.694 [2024-12-07 02:42:39.590136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:28.694 [2024-12-07 02:42:39.590151] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:28.694 [2024-12-07 02:42:39.590349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.694 
02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.694 [ 00:09:28.694 { 00:09:28.694 "name": "NewBaseBdev", 00:09:28.694 "aliases": [ 00:09:28.694 "9152be1c-5019-45cc-8225-5376833f6ffa" 00:09:28.694 ], 00:09:28.694 "product_name": "Malloc disk", 00:09:28.694 "block_size": 512, 00:09:28.694 "num_blocks": 65536, 00:09:28.694 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:28.694 "assigned_rate_limits": { 00:09:28.694 "rw_ios_per_sec": 0, 00:09:28.694 "rw_mbytes_per_sec": 0, 00:09:28.694 "r_mbytes_per_sec": 0, 00:09:28.694 "w_mbytes_per_sec": 0 00:09:28.694 }, 00:09:28.694 "claimed": true, 00:09:28.694 "claim_type": "exclusive_write", 00:09:28.694 "zoned": false, 00:09:28.694 "supported_io_types": { 00:09:28.694 "read": true, 00:09:28.694 "write": true, 00:09:28.694 "unmap": true, 00:09:28.694 "flush": true, 00:09:28.694 "reset": true, 00:09:28.694 "nvme_admin": false, 00:09:28.694 "nvme_io": false, 00:09:28.694 "nvme_io_md": false, 00:09:28.694 "write_zeroes": true, 00:09:28.694 "zcopy": true, 00:09:28.694 "get_zone_info": false, 00:09:28.694 "zone_management": false, 00:09:28.694 "zone_append": false, 00:09:28.694 "compare": false, 00:09:28.694 "compare_and_write": false, 00:09:28.694 "abort": true, 00:09:28.694 "seek_hole": false, 00:09:28.694 "seek_data": false, 00:09:28.694 "copy": true, 00:09:28.694 "nvme_iov_md": false 00:09:28.694 }, 00:09:28.694 "memory_domains": [ 00:09:28.694 { 00:09:28.694 "dma_device_id": "system", 00:09:28.694 "dma_device_type": 1 
00:09:28.694 }, 00:09:28.694 { 00:09:28.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.694 "dma_device_type": 2 00:09:28.694 } 00:09:28.694 ], 00:09:28.694 "driver_specific": {} 00:09:28.694 } 00:09:28.694 ] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.694 02:42:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.695 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.695 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.695 "name": "Existed_Raid", 00:09:28.695 "uuid": "884dfb89-f786-42e5-b26e-69afcb6832a1", 00:09:28.695 "strip_size_kb": 0, 00:09:28.695 "state": "online", 00:09:28.695 "raid_level": "raid1", 00:09:28.695 "superblock": false, 00:09:28.695 "num_base_bdevs": 3, 00:09:28.695 "num_base_bdevs_discovered": 3, 00:09:28.695 "num_base_bdevs_operational": 3, 00:09:28.695 "base_bdevs_list": [ 00:09:28.695 { 00:09:28.695 "name": "NewBaseBdev", 00:09:28.695 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:28.695 "is_configured": true, 00:09:28.695 "data_offset": 0, 00:09:28.695 "data_size": 65536 00:09:28.695 }, 00:09:28.695 { 00:09:28.695 "name": "BaseBdev2", 00:09:28.695 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:28.695 "is_configured": true, 00:09:28.695 "data_offset": 0, 00:09:28.695 "data_size": 65536 00:09:28.695 }, 00:09:28.695 { 00:09:28.695 "name": "BaseBdev3", 00:09:28.695 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:28.695 "is_configured": true, 00:09:28.695 "data_offset": 0, 00:09:28.695 "data_size": 65536 00:09:28.695 } 00:09:28.695 ] 00:09:28.695 }' 00:09:28.695 02:42:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.695 02:42:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.955 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.217 [2024-12-07 02:42:40.033154] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.217 "name": "Existed_Raid", 00:09:29.217 "aliases": [ 00:09:29.217 "884dfb89-f786-42e5-b26e-69afcb6832a1" 00:09:29.217 ], 00:09:29.217 "product_name": "Raid Volume", 00:09:29.217 "block_size": 512, 00:09:29.217 "num_blocks": 65536, 00:09:29.217 "uuid": "884dfb89-f786-42e5-b26e-69afcb6832a1", 00:09:29.217 "assigned_rate_limits": { 00:09:29.217 "rw_ios_per_sec": 0, 00:09:29.217 "rw_mbytes_per_sec": 0, 00:09:29.217 "r_mbytes_per_sec": 0, 00:09:29.217 "w_mbytes_per_sec": 0 00:09:29.217 }, 00:09:29.217 "claimed": false, 00:09:29.217 "zoned": false, 00:09:29.217 "supported_io_types": { 00:09:29.217 "read": true, 00:09:29.217 "write": true, 00:09:29.217 "unmap": false, 00:09:29.217 "flush": false, 00:09:29.217 "reset": true, 00:09:29.217 "nvme_admin": false, 00:09:29.217 "nvme_io": false, 00:09:29.217 "nvme_io_md": false, 00:09:29.217 "write_zeroes": true, 00:09:29.217 "zcopy": false, 00:09:29.217 "get_zone_info": false, 00:09:29.217 "zone_management": false, 00:09:29.217 
"zone_append": false, 00:09:29.217 "compare": false, 00:09:29.217 "compare_and_write": false, 00:09:29.217 "abort": false, 00:09:29.217 "seek_hole": false, 00:09:29.217 "seek_data": false, 00:09:29.217 "copy": false, 00:09:29.217 "nvme_iov_md": false 00:09:29.217 }, 00:09:29.217 "memory_domains": [ 00:09:29.217 { 00:09:29.217 "dma_device_id": "system", 00:09:29.217 "dma_device_type": 1 00:09:29.217 }, 00:09:29.217 { 00:09:29.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.217 "dma_device_type": 2 00:09:29.217 }, 00:09:29.217 { 00:09:29.217 "dma_device_id": "system", 00:09:29.217 "dma_device_type": 1 00:09:29.217 }, 00:09:29.217 { 00:09:29.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.217 "dma_device_type": 2 00:09:29.217 }, 00:09:29.217 { 00:09:29.217 "dma_device_id": "system", 00:09:29.217 "dma_device_type": 1 00:09:29.217 }, 00:09:29.217 { 00:09:29.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.217 "dma_device_type": 2 00:09:29.217 } 00:09:29.217 ], 00:09:29.217 "driver_specific": { 00:09:29.217 "raid": { 00:09:29.217 "uuid": "884dfb89-f786-42e5-b26e-69afcb6832a1", 00:09:29.217 "strip_size_kb": 0, 00:09:29.217 "state": "online", 00:09:29.217 "raid_level": "raid1", 00:09:29.217 "superblock": false, 00:09:29.217 "num_base_bdevs": 3, 00:09:29.217 "num_base_bdevs_discovered": 3, 00:09:29.217 "num_base_bdevs_operational": 3, 00:09:29.217 "base_bdevs_list": [ 00:09:29.217 { 00:09:29.217 "name": "NewBaseBdev", 00:09:29.217 "uuid": "9152be1c-5019-45cc-8225-5376833f6ffa", 00:09:29.217 "is_configured": true, 00:09:29.217 "data_offset": 0, 00:09:29.217 "data_size": 65536 00:09:29.217 }, 00:09:29.217 { 00:09:29.217 "name": "BaseBdev2", 00:09:29.217 "uuid": "0072e34f-a4fd-4d0f-8e60-ac8d8b8e6a5a", 00:09:29.217 "is_configured": true, 00:09:29.217 "data_offset": 0, 00:09:29.217 "data_size": 65536 00:09:29.217 }, 00:09:29.217 { 00:09:29.217 "name": "BaseBdev3", 00:09:29.217 "uuid": "a968a38e-2323-4e7c-bf50-8c655061ef24", 00:09:29.217 "is_configured": true, 
00:09:29.217 "data_offset": 0, 00:09:29.217 "data_size": 65536 00:09:29.217 } 00:09:29.217 ] 00:09:29.217 } 00:09:29.217 } 00:09:29.217 }' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:29.217 BaseBdev2 00:09:29.217 BaseBdev3' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.217 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.478 [2024-12-07 02:42:40.296444] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:29.478 [2024-12-07 02:42:40.296476] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.478 [2024-12-07 02:42:40.296552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.478 [2024-12-07 02:42:40.296861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.478 [2024-12-07 02:42:40.296878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78672 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78672 ']' 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78672 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78672 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.478 killing process with pid 78672 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78672' 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78672 00:09:29.478 [2024-12-07 02:42:40.346071] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:29.478 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78672 00:09:29.478 [2024-12-07 02:42:40.405310] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.738 ************************************ 00:09:29.738 END TEST raid_state_function_test 00:09:29.738 ************************************ 00:09:29.738 02:42:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:29.738 00:09:29.738 real 0m9.059s 00:09:29.738 user 0m15.172s 00:09:29.738 sys 0m1.905s 00:09:29.738 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.738 02:42:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.998 02:42:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:29.998 02:42:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:29.998 02:42:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.998 02:42:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.998 ************************************ 00:09:29.998 START TEST raid_state_function_test_sb 00:09:29.998 ************************************ 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=79282 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79282' 00:09:29.998 Process raid pid: 79282 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 79282 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 79282 ']' 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.998 02:42:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.998 [2024-12-07 02:42:40.942329] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:29.998 [2024-12-07 02:42:40.942462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.258 [2024-12-07 02:42:41.101032] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.259 [2024-12-07 02:42:41.171380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.259 [2024-12-07 02:42:41.248239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.259 [2024-12-07 02:42:41.248279] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.828 [2024-12-07 02:42:41.775648] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:30.828 [2024-12-07 02:42:41.775699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:30.828 [2024-12-07 02:42:41.775711] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.828 [2024-12-07 02:42:41.775721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.828 [2024-12-07 02:42:41.775727] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:30.828 [2024-12-07 02:42:41.775739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.828 "name": "Existed_Raid", 00:09:30.828 "uuid": "3ad099d7-7c03-438d-93b2-c289e9840716", 00:09:30.828 "strip_size_kb": 0, 00:09:30.828 "state": "configuring", 00:09:30.828 "raid_level": "raid1", 00:09:30.828 "superblock": true, 00:09:30.828 "num_base_bdevs": 3, 00:09:30.828 "num_base_bdevs_discovered": 0, 00:09:30.828 "num_base_bdevs_operational": 3, 00:09:30.828 "base_bdevs_list": [ 00:09:30.828 { 00:09:30.828 "name": "BaseBdev1", 00:09:30.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.828 "is_configured": false, 00:09:30.828 "data_offset": 0, 00:09:30.828 "data_size": 0 00:09:30.828 }, 00:09:30.828 { 00:09:30.828 "name": "BaseBdev2", 00:09:30.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.828 "is_configured": false, 00:09:30.828 "data_offset": 0, 00:09:30.828 "data_size": 0 00:09:30.828 }, 00:09:30.828 { 00:09:30.828 "name": "BaseBdev3", 00:09:30.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.828 "is_configured": false, 00:09:30.828 "data_offset": 0, 00:09:30.828 "data_size": 0 00:09:30.828 } 00:09:30.828 ] 00:09:30.828 }' 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.828 02:42:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.402 [2024-12-07 02:42:42.234737] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.402 [2024-12-07 02:42:42.234830] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.402 [2024-12-07 02:42:42.246757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.402 [2024-12-07 02:42:42.246829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.402 [2024-12-07 02:42:42.246855] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.402 [2024-12-07 02:42:42.246877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.402 [2024-12-07 02:42:42.246894] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.402 [2024-12-07 02:42:42.246915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.402 [2024-12-07 02:42:42.273760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.402 BaseBdev1 
00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.402 [ 00:09:31.402 { 00:09:31.402 "name": "BaseBdev1", 00:09:31.402 "aliases": [ 00:09:31.402 "7ac7c1b4-056e-4030-8c6c-fc0b0e646d3c" 00:09:31.402 ], 00:09:31.402 "product_name": "Malloc disk", 00:09:31.402 "block_size": 512, 00:09:31.402 "num_blocks": 65536, 00:09:31.402 "uuid": "7ac7c1b4-056e-4030-8c6c-fc0b0e646d3c", 00:09:31.402 "assigned_rate_limits": { 00:09:31.402 
"rw_ios_per_sec": 0, 00:09:31.402 "rw_mbytes_per_sec": 0, 00:09:31.402 "r_mbytes_per_sec": 0, 00:09:31.402 "w_mbytes_per_sec": 0 00:09:31.402 }, 00:09:31.402 "claimed": true, 00:09:31.402 "claim_type": "exclusive_write", 00:09:31.402 "zoned": false, 00:09:31.402 "supported_io_types": { 00:09:31.402 "read": true, 00:09:31.402 "write": true, 00:09:31.402 "unmap": true, 00:09:31.402 "flush": true, 00:09:31.402 "reset": true, 00:09:31.402 "nvme_admin": false, 00:09:31.402 "nvme_io": false, 00:09:31.402 "nvme_io_md": false, 00:09:31.402 "write_zeroes": true, 00:09:31.402 "zcopy": true, 00:09:31.402 "get_zone_info": false, 00:09:31.402 "zone_management": false, 00:09:31.402 "zone_append": false, 00:09:31.402 "compare": false, 00:09:31.402 "compare_and_write": false, 00:09:31.402 "abort": true, 00:09:31.402 "seek_hole": false, 00:09:31.402 "seek_data": false, 00:09:31.402 "copy": true, 00:09:31.402 "nvme_iov_md": false 00:09:31.402 }, 00:09:31.402 "memory_domains": [ 00:09:31.402 { 00:09:31.402 "dma_device_id": "system", 00:09:31.402 "dma_device_type": 1 00:09:31.402 }, 00:09:31.402 { 00:09:31.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.402 "dma_device_type": 2 00:09:31.402 } 00:09:31.402 ], 00:09:31.402 "driver_specific": {} 00:09:31.402 } 00:09:31.402 ] 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:31.402 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.403 "name": "Existed_Raid", 00:09:31.403 "uuid": "cdb581d5-c8bd-4efb-ad58-69d531a82e12", 00:09:31.403 "strip_size_kb": 0, 00:09:31.403 "state": "configuring", 00:09:31.403 "raid_level": "raid1", 00:09:31.403 "superblock": true, 00:09:31.403 "num_base_bdevs": 3, 00:09:31.403 "num_base_bdevs_discovered": 1, 00:09:31.403 "num_base_bdevs_operational": 3, 00:09:31.403 "base_bdevs_list": [ 00:09:31.403 { 00:09:31.403 "name": "BaseBdev1", 00:09:31.403 "uuid": "7ac7c1b4-056e-4030-8c6c-fc0b0e646d3c", 00:09:31.403 "is_configured": true, 00:09:31.403 "data_offset": 2048, 00:09:31.403 "data_size": 63488 
00:09:31.403 }, 00:09:31.403 { 00:09:31.403 "name": "BaseBdev2", 00:09:31.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.403 "is_configured": false, 00:09:31.403 "data_offset": 0, 00:09:31.403 "data_size": 0 00:09:31.403 }, 00:09:31.403 { 00:09:31.403 "name": "BaseBdev3", 00:09:31.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.403 "is_configured": false, 00:09:31.403 "data_offset": 0, 00:09:31.403 "data_size": 0 00:09:31.403 } 00:09:31.403 ] 00:09:31.403 }' 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.403 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.662 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.662 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 [2024-12-07 02:42:42.721005] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.662 [2024-12-07 02:42:42.721055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:31.662 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.662 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.662 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.662 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.662 [2024-12-07 02:42:42.733025] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.662 [2024-12-07 02:42:42.735144] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.662 [2024-12-07 02:42:42.735187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.662 [2024-12-07 02:42:42.735196] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.662 [2024-12-07 02:42:42.735207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.921 "name": "Existed_Raid", 00:09:31.921 "uuid": "69813a3f-b23e-45ac-a447-e5bfb9b3f9c1", 00:09:31.921 "strip_size_kb": 0, 00:09:31.921 "state": "configuring", 00:09:31.921 "raid_level": "raid1", 00:09:31.921 "superblock": true, 00:09:31.921 "num_base_bdevs": 3, 00:09:31.921 "num_base_bdevs_discovered": 1, 00:09:31.921 "num_base_bdevs_operational": 3, 00:09:31.921 "base_bdevs_list": [ 00:09:31.921 { 00:09:31.921 "name": "BaseBdev1", 00:09:31.921 "uuid": "7ac7c1b4-056e-4030-8c6c-fc0b0e646d3c", 00:09:31.921 "is_configured": true, 00:09:31.921 "data_offset": 2048, 00:09:31.921 "data_size": 63488 00:09:31.921 }, 00:09:31.921 { 00:09:31.921 "name": "BaseBdev2", 00:09:31.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.921 "is_configured": false, 00:09:31.921 "data_offset": 0, 00:09:31.921 "data_size": 0 00:09:31.921 }, 00:09:31.921 { 00:09:31.921 "name": "BaseBdev3", 00:09:31.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.921 "is_configured": false, 00:09:31.921 "data_offset": 0, 00:09:31.921 "data_size": 0 00:09:31.921 } 00:09:31.921 ] 00:09:31.921 }' 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.921 02:42:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.180 [2024-12-07 02:42:43.206559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.180 BaseBdev2 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.180 [ 00:09:32.180 { 00:09:32.180 "name": "BaseBdev2", 00:09:32.180 "aliases": [ 00:09:32.180 "51317333-64c4-4eda-86bf-bec14889d162" 00:09:32.180 ], 00:09:32.180 "product_name": "Malloc disk", 00:09:32.180 "block_size": 512, 00:09:32.180 "num_blocks": 65536, 00:09:32.180 "uuid": "51317333-64c4-4eda-86bf-bec14889d162", 00:09:32.180 "assigned_rate_limits": { 00:09:32.180 "rw_ios_per_sec": 0, 00:09:32.180 "rw_mbytes_per_sec": 0, 00:09:32.180 "r_mbytes_per_sec": 0, 00:09:32.180 "w_mbytes_per_sec": 0 00:09:32.180 }, 00:09:32.180 "claimed": true, 00:09:32.180 "claim_type": "exclusive_write", 00:09:32.180 "zoned": false, 00:09:32.180 "supported_io_types": { 00:09:32.180 "read": true, 00:09:32.180 "write": true, 00:09:32.180 "unmap": true, 00:09:32.180 "flush": true, 00:09:32.180 "reset": true, 00:09:32.180 "nvme_admin": false, 00:09:32.180 "nvme_io": false, 00:09:32.180 "nvme_io_md": false, 00:09:32.180 "write_zeroes": true, 00:09:32.180 "zcopy": true, 00:09:32.180 "get_zone_info": false, 00:09:32.180 "zone_management": false, 00:09:32.180 "zone_append": false, 00:09:32.180 "compare": false, 00:09:32.180 "compare_and_write": false, 00:09:32.180 "abort": true, 00:09:32.180 "seek_hole": false, 00:09:32.180 "seek_data": false, 00:09:32.180 "copy": true, 00:09:32.180 "nvme_iov_md": false 00:09:32.180 }, 00:09:32.180 "memory_domains": [ 00:09:32.180 { 00:09:32.180 "dma_device_id": "system", 00:09:32.180 "dma_device_type": 1 00:09:32.180 }, 00:09:32.180 { 00:09:32.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.180 "dma_device_type": 2 00:09:32.180 } 00:09:32.180 ], 00:09:32.180 "driver_specific": {} 00:09:32.180 } 00:09:32.180 ] 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.180 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.438 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.438 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.438 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.438 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.438 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.438 
02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.438 "name": "Existed_Raid", 00:09:32.438 "uuid": "69813a3f-b23e-45ac-a447-e5bfb9b3f9c1", 00:09:32.438 "strip_size_kb": 0, 00:09:32.438 "state": "configuring", 00:09:32.438 "raid_level": "raid1", 00:09:32.438 "superblock": true, 00:09:32.438 "num_base_bdevs": 3, 00:09:32.438 "num_base_bdevs_discovered": 2, 00:09:32.438 "num_base_bdevs_operational": 3, 00:09:32.438 "base_bdevs_list": [ 00:09:32.438 { 00:09:32.438 "name": "BaseBdev1", 00:09:32.439 "uuid": "7ac7c1b4-056e-4030-8c6c-fc0b0e646d3c", 00:09:32.439 "is_configured": true, 00:09:32.439 "data_offset": 2048, 00:09:32.439 "data_size": 63488 00:09:32.439 }, 00:09:32.439 { 00:09:32.439 "name": "BaseBdev2", 00:09:32.439 "uuid": "51317333-64c4-4eda-86bf-bec14889d162", 00:09:32.439 "is_configured": true, 00:09:32.439 "data_offset": 2048, 00:09:32.439 "data_size": 63488 00:09:32.439 }, 00:09:32.439 { 00:09:32.439 "name": "BaseBdev3", 00:09:32.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.439 "is_configured": false, 00:09:32.439 "data_offset": 0, 00:09:32.439 "data_size": 0 00:09:32.439 } 00:09:32.439 ] 00:09:32.439 }' 00:09:32.439 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.439 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.698 [2024-12-07 02:42:43.686577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.698 [2024-12-07 02:42:43.686881] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006980 00:09:32.698 [2024-12-07 02:42:43.686946] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.698 [2024-12-07 02:42:43.687279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:32.698 BaseBdev3 00:09:32.698 [2024-12-07 02:42:43.687499] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:32.698 [2024-12-07 02:42:43.687549] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:32.698 [2024-12-07 02:42:43.687771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.698 02:42:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.698 [ 00:09:32.698 { 00:09:32.698 "name": "BaseBdev3", 00:09:32.698 "aliases": [ 00:09:32.698 "3d7c3451-be9d-4120-b1e5-d727144df2d2" 00:09:32.698 ], 00:09:32.698 "product_name": "Malloc disk", 00:09:32.698 "block_size": 512, 00:09:32.698 "num_blocks": 65536, 00:09:32.698 "uuid": "3d7c3451-be9d-4120-b1e5-d727144df2d2", 00:09:32.698 "assigned_rate_limits": { 00:09:32.698 "rw_ios_per_sec": 0, 00:09:32.698 "rw_mbytes_per_sec": 0, 00:09:32.698 "r_mbytes_per_sec": 0, 00:09:32.698 "w_mbytes_per_sec": 0 00:09:32.698 }, 00:09:32.698 "claimed": true, 00:09:32.698 "claim_type": "exclusive_write", 00:09:32.698 "zoned": false, 00:09:32.698 "supported_io_types": { 00:09:32.698 "read": true, 00:09:32.698 "write": true, 00:09:32.698 "unmap": true, 00:09:32.698 "flush": true, 00:09:32.698 "reset": true, 00:09:32.698 "nvme_admin": false, 00:09:32.698 "nvme_io": false, 00:09:32.698 "nvme_io_md": false, 00:09:32.698 "write_zeroes": true, 00:09:32.698 "zcopy": true, 00:09:32.698 "get_zone_info": false, 00:09:32.698 "zone_management": false, 00:09:32.698 "zone_append": false, 00:09:32.698 "compare": false, 00:09:32.698 "compare_and_write": false, 00:09:32.698 "abort": true, 00:09:32.698 "seek_hole": false, 00:09:32.698 "seek_data": false, 00:09:32.698 "copy": true, 00:09:32.698 "nvme_iov_md": false 00:09:32.698 }, 00:09:32.698 "memory_domains": [ 00:09:32.698 { 00:09:32.698 "dma_device_id": "system", 00:09:32.698 "dma_device_type": 1 00:09:32.698 }, 00:09:32.698 { 00:09:32.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.698 "dma_device_type": 2 00:09:32.698 } 00:09:32.698 ], 00:09:32.698 "driver_specific": {} 00:09:32.698 } 00:09:32.698 ] 
00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.698 
02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.698 "name": "Existed_Raid", 00:09:32.698 "uuid": "69813a3f-b23e-45ac-a447-e5bfb9b3f9c1", 00:09:32.698 "strip_size_kb": 0, 00:09:32.698 "state": "online", 00:09:32.698 "raid_level": "raid1", 00:09:32.698 "superblock": true, 00:09:32.698 "num_base_bdevs": 3, 00:09:32.698 "num_base_bdevs_discovered": 3, 00:09:32.698 "num_base_bdevs_operational": 3, 00:09:32.698 "base_bdevs_list": [ 00:09:32.698 { 00:09:32.698 "name": "BaseBdev1", 00:09:32.698 "uuid": "7ac7c1b4-056e-4030-8c6c-fc0b0e646d3c", 00:09:32.698 "is_configured": true, 00:09:32.698 "data_offset": 2048, 00:09:32.698 "data_size": 63488 00:09:32.698 }, 00:09:32.698 { 00:09:32.698 "name": "BaseBdev2", 00:09:32.698 "uuid": "51317333-64c4-4eda-86bf-bec14889d162", 00:09:32.698 "is_configured": true, 00:09:32.698 "data_offset": 2048, 00:09:32.698 "data_size": 63488 00:09:32.698 }, 00:09:32.698 { 00:09:32.698 "name": "BaseBdev3", 00:09:32.698 "uuid": "3d7c3451-be9d-4120-b1e5-d727144df2d2", 00:09:32.698 "is_configured": true, 00:09:32.698 "data_offset": 2048, 00:09:32.698 "data_size": 63488 00:09:32.698 } 00:09:32.698 ] 00:09:32.698 }' 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.698 02:42:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.280 [2024-12-07 02:42:44.106199] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.280 "name": "Existed_Raid", 00:09:33.280 "aliases": [ 00:09:33.280 "69813a3f-b23e-45ac-a447-e5bfb9b3f9c1" 00:09:33.280 ], 00:09:33.280 "product_name": "Raid Volume", 00:09:33.280 "block_size": 512, 00:09:33.280 "num_blocks": 63488, 00:09:33.280 "uuid": "69813a3f-b23e-45ac-a447-e5bfb9b3f9c1", 00:09:33.280 "assigned_rate_limits": { 00:09:33.280 "rw_ios_per_sec": 0, 00:09:33.280 "rw_mbytes_per_sec": 0, 00:09:33.280 "r_mbytes_per_sec": 0, 00:09:33.280 "w_mbytes_per_sec": 0 00:09:33.280 }, 00:09:33.280 "claimed": false, 00:09:33.280 "zoned": false, 00:09:33.280 "supported_io_types": { 00:09:33.280 "read": true, 00:09:33.280 "write": true, 00:09:33.280 "unmap": false, 00:09:33.280 "flush": false, 00:09:33.280 "reset": true, 00:09:33.280 "nvme_admin": false, 00:09:33.280 "nvme_io": false, 00:09:33.280 "nvme_io_md": false, 00:09:33.280 "write_zeroes": true, 
00:09:33.280 "zcopy": false, 00:09:33.280 "get_zone_info": false, 00:09:33.280 "zone_management": false, 00:09:33.280 "zone_append": false, 00:09:33.280 "compare": false, 00:09:33.280 "compare_and_write": false, 00:09:33.280 "abort": false, 00:09:33.280 "seek_hole": false, 00:09:33.280 "seek_data": false, 00:09:33.280 "copy": false, 00:09:33.280 "nvme_iov_md": false 00:09:33.280 }, 00:09:33.280 "memory_domains": [ 00:09:33.280 { 00:09:33.280 "dma_device_id": "system", 00:09:33.280 "dma_device_type": 1 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.280 "dma_device_type": 2 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "dma_device_id": "system", 00:09:33.280 "dma_device_type": 1 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.280 "dma_device_type": 2 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "dma_device_id": "system", 00:09:33.280 "dma_device_type": 1 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.280 "dma_device_type": 2 00:09:33.280 } 00:09:33.280 ], 00:09:33.280 "driver_specific": { 00:09:33.280 "raid": { 00:09:33.280 "uuid": "69813a3f-b23e-45ac-a447-e5bfb9b3f9c1", 00:09:33.280 "strip_size_kb": 0, 00:09:33.280 "state": "online", 00:09:33.280 "raid_level": "raid1", 00:09:33.280 "superblock": true, 00:09:33.280 "num_base_bdevs": 3, 00:09:33.280 "num_base_bdevs_discovered": 3, 00:09:33.280 "num_base_bdevs_operational": 3, 00:09:33.280 "base_bdevs_list": [ 00:09:33.280 { 00:09:33.280 "name": "BaseBdev1", 00:09:33.280 "uuid": "7ac7c1b4-056e-4030-8c6c-fc0b0e646d3c", 00:09:33.280 "is_configured": true, 00:09:33.280 "data_offset": 2048, 00:09:33.280 "data_size": 63488 00:09:33.280 }, 00:09:33.280 { 00:09:33.280 "name": "BaseBdev2", 00:09:33.280 "uuid": "51317333-64c4-4eda-86bf-bec14889d162", 00:09:33.280 "is_configured": true, 00:09:33.280 "data_offset": 2048, 00:09:33.280 "data_size": 63488 00:09:33.280 }, 00:09:33.280 { 
00:09:33.280 "name": "BaseBdev3", 00:09:33.280 "uuid": "3d7c3451-be9d-4120-b1e5-d727144df2d2", 00:09:33.280 "is_configured": true, 00:09:33.280 "data_offset": 2048, 00:09:33.280 "data_size": 63488 00:09:33.280 } 00:09:33.280 ] 00:09:33.280 } 00:09:33.280 } 00:09:33.280 }' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.280 BaseBdev2 00:09:33.280 BaseBdev3' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.280 02:42:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.280 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.281 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.541 [2024-12-07 02:42:44.389587] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.541 
02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.541 "name": "Existed_Raid", 00:09:33.541 "uuid": "69813a3f-b23e-45ac-a447-e5bfb9b3f9c1", 00:09:33.541 "strip_size_kb": 0, 00:09:33.541 "state": "online", 00:09:33.541 "raid_level": "raid1", 00:09:33.541 "superblock": true, 00:09:33.541 "num_base_bdevs": 3, 00:09:33.541 "num_base_bdevs_discovered": 2, 00:09:33.541 "num_base_bdevs_operational": 2, 00:09:33.541 "base_bdevs_list": [ 00:09:33.541 { 00:09:33.541 "name": null, 00:09:33.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.541 "is_configured": false, 00:09:33.541 "data_offset": 0, 00:09:33.541 "data_size": 63488 00:09:33.541 }, 00:09:33.541 { 00:09:33.541 "name": "BaseBdev2", 00:09:33.541 "uuid": "51317333-64c4-4eda-86bf-bec14889d162", 00:09:33.541 "is_configured": true, 00:09:33.541 "data_offset": 2048, 00:09:33.541 "data_size": 63488 00:09:33.541 }, 00:09:33.541 { 00:09:33.541 "name": "BaseBdev3", 00:09:33.541 "uuid": "3d7c3451-be9d-4120-b1e5-d727144df2d2", 00:09:33.541 "is_configured": true, 00:09:33.541 "data_offset": 2048, 00:09:33.541 "data_size": 63488 00:09:33.541 } 00:09:33.541 ] 00:09:33.541 }' 00:09:33.541 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.541 
02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.801 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.061 [2024-12-07 02:42:44.881603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.061 [2024-12-07 02:42:44.961932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.061 [2024-12-07 02:42:44.962052] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.061 [2024-12-07 02:42:44.982992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.061 [2024-12-07 02:42:44.983132] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.061 [2024-12-07 02:42:44.983151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.061 02:42:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.061 BaseBdev2 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.061 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.061 [ 00:09:34.062 { 00:09:34.062 "name": "BaseBdev2", 00:09:34.062 "aliases": [ 00:09:34.062 "cb3c6109-9b22-46fd-8297-216e856fa65c" 00:09:34.062 ], 00:09:34.062 "product_name": "Malloc disk", 00:09:34.062 "block_size": 512, 00:09:34.062 "num_blocks": 65536, 00:09:34.062 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:34.062 "assigned_rate_limits": { 00:09:34.062 "rw_ios_per_sec": 0, 00:09:34.062 "rw_mbytes_per_sec": 0, 00:09:34.062 "r_mbytes_per_sec": 0, 00:09:34.062 "w_mbytes_per_sec": 0 00:09:34.062 }, 00:09:34.062 "claimed": false, 00:09:34.062 "zoned": false, 00:09:34.062 "supported_io_types": { 00:09:34.062 "read": true, 00:09:34.062 "write": true, 00:09:34.062 "unmap": true, 00:09:34.062 "flush": true, 00:09:34.062 "reset": true, 00:09:34.062 "nvme_admin": false, 00:09:34.062 "nvme_io": false, 00:09:34.062 
"nvme_io_md": false, 00:09:34.062 "write_zeroes": true, 00:09:34.062 "zcopy": true, 00:09:34.062 "get_zone_info": false, 00:09:34.062 "zone_management": false, 00:09:34.062 "zone_append": false, 00:09:34.062 "compare": false, 00:09:34.062 "compare_and_write": false, 00:09:34.062 "abort": true, 00:09:34.062 "seek_hole": false, 00:09:34.062 "seek_data": false, 00:09:34.062 "copy": true, 00:09:34.062 "nvme_iov_md": false 00:09:34.062 }, 00:09:34.062 "memory_domains": [ 00:09:34.062 { 00:09:34.062 "dma_device_id": "system", 00:09:34.062 "dma_device_type": 1 00:09:34.062 }, 00:09:34.062 { 00:09:34.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.062 "dma_device_type": 2 00:09:34.062 } 00:09:34.062 ], 00:09:34.062 "driver_specific": {} 00:09:34.062 } 00:09:34.062 ] 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.062 BaseBdev3 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.062 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.321 [ 00:09:34.321 { 00:09:34.321 "name": "BaseBdev3", 00:09:34.321 "aliases": [ 00:09:34.321 "4edca3af-a90a-4644-bcc7-14236fa7e640" 00:09:34.321 ], 00:09:34.321 "product_name": "Malloc disk", 00:09:34.321 "block_size": 512, 00:09:34.321 "num_blocks": 65536, 00:09:34.321 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:34.321 "assigned_rate_limits": { 00:09:34.321 "rw_ios_per_sec": 0, 00:09:34.321 "rw_mbytes_per_sec": 0, 00:09:34.321 "r_mbytes_per_sec": 0, 00:09:34.321 "w_mbytes_per_sec": 0 00:09:34.321 }, 00:09:34.321 "claimed": false, 00:09:34.321 "zoned": false, 00:09:34.321 "supported_io_types": { 00:09:34.321 "read": true, 00:09:34.322 "write": true, 00:09:34.322 "unmap": true, 00:09:34.322 "flush": true, 00:09:34.322 "reset": true, 00:09:34.322 "nvme_admin": false, 
00:09:34.322 "nvme_io": false, 00:09:34.322 "nvme_io_md": false, 00:09:34.322 "write_zeroes": true, 00:09:34.322 "zcopy": true, 00:09:34.322 "get_zone_info": false, 00:09:34.322 "zone_management": false, 00:09:34.322 "zone_append": false, 00:09:34.322 "compare": false, 00:09:34.322 "compare_and_write": false, 00:09:34.322 "abort": true, 00:09:34.322 "seek_hole": false, 00:09:34.322 "seek_data": false, 00:09:34.322 "copy": true, 00:09:34.322 "nvme_iov_md": false 00:09:34.322 }, 00:09:34.322 "memory_domains": [ 00:09:34.322 { 00:09:34.322 "dma_device_id": "system", 00:09:34.322 "dma_device_type": 1 00:09:34.322 }, 00:09:34.322 { 00:09:34.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.322 "dma_device_type": 2 00:09:34.322 } 00:09:34.322 ], 00:09:34.322 "driver_specific": {} 00:09:34.322 } 00:09:34.322 ] 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.322 [2024-12-07 02:42:45.156182] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.322 [2024-12-07 02:42:45.156237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.322 [2024-12-07 02:42:45.156257] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.322 [2024-12-07 02:42:45.158288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.322 
02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.322 "name": "Existed_Raid", 00:09:34.322 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:34.322 "strip_size_kb": 0, 00:09:34.322 "state": "configuring", 00:09:34.322 "raid_level": "raid1", 00:09:34.322 "superblock": true, 00:09:34.322 "num_base_bdevs": 3, 00:09:34.322 "num_base_bdevs_discovered": 2, 00:09:34.322 "num_base_bdevs_operational": 3, 00:09:34.322 "base_bdevs_list": [ 00:09:34.322 { 00:09:34.322 "name": "BaseBdev1", 00:09:34.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.322 "is_configured": false, 00:09:34.322 "data_offset": 0, 00:09:34.322 "data_size": 0 00:09:34.322 }, 00:09:34.322 { 00:09:34.322 "name": "BaseBdev2", 00:09:34.322 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:34.322 "is_configured": true, 00:09:34.322 "data_offset": 2048, 00:09:34.322 "data_size": 63488 00:09:34.322 }, 00:09:34.322 { 00:09:34.322 "name": "BaseBdev3", 00:09:34.322 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:34.322 "is_configured": true, 00:09:34.322 "data_offset": 2048, 00:09:34.322 "data_size": 63488 00:09:34.322 } 00:09:34.322 ] 00:09:34.322 }' 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.322 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.582 [2024-12-07 02:42:45.615382] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.582 02:42:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.582 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.842 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.842 "name": 
"Existed_Raid", 00:09:34.842 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:34.842 "strip_size_kb": 0, 00:09:34.842 "state": "configuring", 00:09:34.842 "raid_level": "raid1", 00:09:34.842 "superblock": true, 00:09:34.842 "num_base_bdevs": 3, 00:09:34.842 "num_base_bdevs_discovered": 1, 00:09:34.842 "num_base_bdevs_operational": 3, 00:09:34.842 "base_bdevs_list": [ 00:09:34.842 { 00:09:34.842 "name": "BaseBdev1", 00:09:34.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.842 "is_configured": false, 00:09:34.842 "data_offset": 0, 00:09:34.842 "data_size": 0 00:09:34.842 }, 00:09:34.842 { 00:09:34.842 "name": null, 00:09:34.842 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:34.842 "is_configured": false, 00:09:34.842 "data_offset": 0, 00:09:34.842 "data_size": 63488 00:09:34.842 }, 00:09:34.842 { 00:09:34.842 "name": "BaseBdev3", 00:09:34.842 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:34.842 "is_configured": true, 00:09:34.842 "data_offset": 2048, 00:09:34.842 "data_size": 63488 00:09:34.842 } 00:09:34.842 ] 00:09:34.842 }' 00:09:34.842 02:42:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.842 02:42:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:35.102 
02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.102 [2024-12-07 02:42:46.103434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.102 BaseBdev1 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.102 [ 00:09:35.102 { 00:09:35.102 "name": "BaseBdev1", 00:09:35.102 "aliases": [ 00:09:35.102 "ab9ca4d7-503d-463a-a7ff-f533bd488b39" 00:09:35.102 ], 00:09:35.102 "product_name": "Malloc disk", 00:09:35.102 "block_size": 512, 00:09:35.102 "num_blocks": 65536, 00:09:35.102 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:35.102 "assigned_rate_limits": { 00:09:35.102 "rw_ios_per_sec": 0, 00:09:35.102 "rw_mbytes_per_sec": 0, 00:09:35.102 "r_mbytes_per_sec": 0, 00:09:35.102 "w_mbytes_per_sec": 0 00:09:35.102 }, 00:09:35.102 "claimed": true, 00:09:35.102 "claim_type": "exclusive_write", 00:09:35.102 "zoned": false, 00:09:35.102 "supported_io_types": { 00:09:35.102 "read": true, 00:09:35.102 "write": true, 00:09:35.102 "unmap": true, 00:09:35.102 "flush": true, 00:09:35.102 "reset": true, 00:09:35.102 "nvme_admin": false, 00:09:35.102 "nvme_io": false, 00:09:35.102 "nvme_io_md": false, 00:09:35.102 "write_zeroes": true, 00:09:35.102 "zcopy": true, 00:09:35.102 "get_zone_info": false, 00:09:35.102 "zone_management": false, 00:09:35.102 "zone_append": false, 00:09:35.102 "compare": false, 00:09:35.102 "compare_and_write": false, 00:09:35.102 "abort": true, 00:09:35.102 "seek_hole": false, 00:09:35.102 "seek_data": false, 00:09:35.102 "copy": true, 00:09:35.102 "nvme_iov_md": false 00:09:35.102 }, 00:09:35.102 "memory_domains": [ 00:09:35.102 { 00:09:35.102 "dma_device_id": "system", 00:09:35.102 "dma_device_type": 1 00:09:35.102 }, 00:09:35.102 { 00:09:35.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.102 "dma_device_type": 2 00:09:35.102 } 00:09:35.102 ], 00:09:35.102 "driver_specific": {} 00:09:35.102 } 00:09:35.102 ] 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:35.102 
02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.102 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.362 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.362 "name": "Existed_Raid", 00:09:35.362 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:35.362 "strip_size_kb": 0, 
00:09:35.362 "state": "configuring", 00:09:35.362 "raid_level": "raid1", 00:09:35.362 "superblock": true, 00:09:35.362 "num_base_bdevs": 3, 00:09:35.362 "num_base_bdevs_discovered": 2, 00:09:35.362 "num_base_bdevs_operational": 3, 00:09:35.362 "base_bdevs_list": [ 00:09:35.362 { 00:09:35.362 "name": "BaseBdev1", 00:09:35.362 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:35.362 "is_configured": true, 00:09:35.362 "data_offset": 2048, 00:09:35.362 "data_size": 63488 00:09:35.362 }, 00:09:35.362 { 00:09:35.362 "name": null, 00:09:35.362 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:35.362 "is_configured": false, 00:09:35.362 "data_offset": 0, 00:09:35.362 "data_size": 63488 00:09:35.362 }, 00:09:35.362 { 00:09:35.362 "name": "BaseBdev3", 00:09:35.362 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:35.362 "is_configured": true, 00:09:35.362 "data_offset": 2048, 00:09:35.362 "data_size": 63488 00:09:35.362 } 00:09:35.362 ] 00:09:35.362 }' 00:09:35.362 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.362 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.622 [2024-12-07 02:42:46.594610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.622 02:42:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.622 "name": "Existed_Raid", 00:09:35.622 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:35.622 "strip_size_kb": 0, 00:09:35.622 "state": "configuring", 00:09:35.622 "raid_level": "raid1", 00:09:35.622 "superblock": true, 00:09:35.622 "num_base_bdevs": 3, 00:09:35.622 "num_base_bdevs_discovered": 1, 00:09:35.622 "num_base_bdevs_operational": 3, 00:09:35.622 "base_bdevs_list": [ 00:09:35.622 { 00:09:35.622 "name": "BaseBdev1", 00:09:35.622 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:35.622 "is_configured": true, 00:09:35.622 "data_offset": 2048, 00:09:35.622 "data_size": 63488 00:09:35.622 }, 00:09:35.622 { 00:09:35.622 "name": null, 00:09:35.622 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:35.622 "is_configured": false, 00:09:35.622 "data_offset": 0, 00:09:35.622 "data_size": 63488 00:09:35.622 }, 00:09:35.622 { 00:09:35.622 "name": null, 00:09:35.622 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:35.622 "is_configured": false, 00:09:35.622 "data_offset": 0, 00:09:35.622 "data_size": 63488 00:09:35.622 } 00:09:35.622 ] 00:09:35.622 }' 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.622 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.193 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.193 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.193 02:42:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:36.193 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.193 02:42:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.193 [2024-12-07 02:42:47.013894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.193 "name": "Existed_Raid", 00:09:36.193 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:36.193 "strip_size_kb": 0, 00:09:36.193 "state": "configuring", 00:09:36.193 "raid_level": "raid1", 00:09:36.193 "superblock": true, 00:09:36.193 "num_base_bdevs": 3, 00:09:36.193 "num_base_bdevs_discovered": 2, 00:09:36.193 "num_base_bdevs_operational": 3, 00:09:36.193 "base_bdevs_list": [ 00:09:36.193 { 00:09:36.193 "name": "BaseBdev1", 00:09:36.193 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:36.193 "is_configured": true, 00:09:36.193 "data_offset": 2048, 00:09:36.193 "data_size": 63488 00:09:36.193 }, 00:09:36.193 { 00:09:36.193 "name": null, 00:09:36.193 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:36.193 "is_configured": false, 00:09:36.193 "data_offset": 0, 00:09:36.193 "data_size": 63488 00:09:36.193 }, 00:09:36.193 { 00:09:36.193 "name": "BaseBdev3", 00:09:36.193 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:36.193 "is_configured": true, 00:09:36.193 "data_offset": 2048, 00:09:36.193 "data_size": 63488 00:09:36.193 } 00:09:36.193 ] 00:09:36.193 }' 00:09:36.193 02:42:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.193 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.454 [2024-12-07 02:42:47.361340] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.454 02:42:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.454 "name": "Existed_Raid", 00:09:36.454 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:36.454 "strip_size_kb": 0, 00:09:36.454 "state": "configuring", 00:09:36.454 "raid_level": "raid1", 00:09:36.454 "superblock": true, 00:09:36.454 "num_base_bdevs": 3, 00:09:36.454 "num_base_bdevs_discovered": 1, 00:09:36.454 "num_base_bdevs_operational": 3, 00:09:36.454 "base_bdevs_list": [ 00:09:36.454 { 00:09:36.454 "name": null, 00:09:36.454 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:36.454 "is_configured": false, 00:09:36.454 "data_offset": 0, 00:09:36.454 "data_size": 63488 00:09:36.454 }, 00:09:36.454 { 00:09:36.454 
"name": null, 00:09:36.454 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:36.454 "is_configured": false, 00:09:36.454 "data_offset": 0, 00:09:36.454 "data_size": 63488 00:09:36.454 }, 00:09:36.454 { 00:09:36.454 "name": "BaseBdev3", 00:09:36.454 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:36.454 "is_configured": true, 00:09:36.454 "data_offset": 2048, 00:09:36.454 "data_size": 63488 00:09:36.454 } 00:09:36.454 ] 00:09:36.454 }' 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.454 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.024 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.024 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.024 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.024 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.025 [2024-12-07 02:42:47.848425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.025 02:42:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.025 "name": "Existed_Raid", 00:09:37.025 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:37.025 "strip_size_kb": 0, 
00:09:37.025 "state": "configuring", 00:09:37.025 "raid_level": "raid1", 00:09:37.025 "superblock": true, 00:09:37.025 "num_base_bdevs": 3, 00:09:37.025 "num_base_bdevs_discovered": 2, 00:09:37.025 "num_base_bdevs_operational": 3, 00:09:37.025 "base_bdevs_list": [ 00:09:37.025 { 00:09:37.025 "name": null, 00:09:37.025 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:37.025 "is_configured": false, 00:09:37.025 "data_offset": 0, 00:09:37.025 "data_size": 63488 00:09:37.025 }, 00:09:37.025 { 00:09:37.025 "name": "BaseBdev2", 00:09:37.025 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:37.025 "is_configured": true, 00:09:37.025 "data_offset": 2048, 00:09:37.025 "data_size": 63488 00:09:37.025 }, 00:09:37.025 { 00:09:37.025 "name": "BaseBdev3", 00:09:37.025 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:37.025 "is_configured": true, 00:09:37.025 "data_offset": 2048, 00:09:37.025 "data_size": 63488 00:09:37.025 } 00:09:37.025 ] 00:09:37.025 }' 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.025 02:42:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.287 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ab9ca4d7-503d-463a-a7ff-f533bd488b39 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.554 [2024-12-07 02:42:48.408297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:37.554 [2024-12-07 02:42:48.408571] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:37.554 [2024-12-07 02:42:48.408631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:37.554 [2024-12-07 02:42:48.408949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:37.554 [2024-12-07 02:42:48.409126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:37.554 NewBaseBdev 00:09:37.554 [2024-12-07 02:42:48.409179] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:09:37.554 [2024-12-07 02:42:48.409289] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev 
NewBaseBdev 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.554 [ 00:09:37.554 { 00:09:37.554 "name": "NewBaseBdev", 00:09:37.554 "aliases": [ 00:09:37.554 "ab9ca4d7-503d-463a-a7ff-f533bd488b39" 00:09:37.554 ], 00:09:37.554 "product_name": "Malloc disk", 00:09:37.554 "block_size": 512, 00:09:37.554 "num_blocks": 65536, 00:09:37.554 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:37.554 "assigned_rate_limits": { 00:09:37.554 "rw_ios_per_sec": 0, 00:09:37.554 "rw_mbytes_per_sec": 0, 00:09:37.554 "r_mbytes_per_sec": 0, 00:09:37.554 "w_mbytes_per_sec": 0 00:09:37.554 }, 00:09:37.554 "claimed": true, 00:09:37.554 "claim_type": 
"exclusive_write", 00:09:37.554 "zoned": false, 00:09:37.554 "supported_io_types": { 00:09:37.554 "read": true, 00:09:37.554 "write": true, 00:09:37.554 "unmap": true, 00:09:37.554 "flush": true, 00:09:37.554 "reset": true, 00:09:37.554 "nvme_admin": false, 00:09:37.554 "nvme_io": false, 00:09:37.554 "nvme_io_md": false, 00:09:37.554 "write_zeroes": true, 00:09:37.554 "zcopy": true, 00:09:37.554 "get_zone_info": false, 00:09:37.554 "zone_management": false, 00:09:37.554 "zone_append": false, 00:09:37.554 "compare": false, 00:09:37.554 "compare_and_write": false, 00:09:37.554 "abort": true, 00:09:37.554 "seek_hole": false, 00:09:37.554 "seek_data": false, 00:09:37.554 "copy": true, 00:09:37.554 "nvme_iov_md": false 00:09:37.554 }, 00:09:37.554 "memory_domains": [ 00:09:37.554 { 00:09:37.554 "dma_device_id": "system", 00:09:37.554 "dma_device_type": 1 00:09:37.554 }, 00:09:37.554 { 00:09:37.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.554 "dma_device_type": 2 00:09:37.554 } 00:09:37.554 ], 00:09:37.554 "driver_specific": {} 00:09:37.554 } 00:09:37.554 ] 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.554 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.554 "name": "Existed_Raid", 00:09:37.554 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:37.554 "strip_size_kb": 0, 00:09:37.554 "state": "online", 00:09:37.554 "raid_level": "raid1", 00:09:37.554 "superblock": true, 00:09:37.554 "num_base_bdevs": 3, 00:09:37.555 "num_base_bdevs_discovered": 3, 00:09:37.555 "num_base_bdevs_operational": 3, 00:09:37.555 "base_bdevs_list": [ 00:09:37.555 { 00:09:37.555 "name": "NewBaseBdev", 00:09:37.555 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:37.555 "is_configured": true, 00:09:37.555 "data_offset": 2048, 00:09:37.555 "data_size": 63488 00:09:37.555 }, 00:09:37.555 { 00:09:37.555 "name": "BaseBdev2", 00:09:37.555 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:37.555 "is_configured": true, 00:09:37.555 "data_offset": 2048, 00:09:37.555 "data_size": 63488 
00:09:37.555 }, 00:09:37.555 { 00:09:37.555 "name": "BaseBdev3", 00:09:37.555 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:37.555 "is_configured": true, 00:09:37.555 "data_offset": 2048, 00:09:37.555 "data_size": 63488 00:09:37.555 } 00:09:37.555 ] 00:09:37.555 }' 00:09:37.555 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.555 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.831 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.831 [2024-12-07 02:42:48.899850] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.092 02:42:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.092 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.092 "name": 
"Existed_Raid", 00:09:38.092 "aliases": [ 00:09:38.092 "165c7370-4c6e-4771-b72b-2f0002c0e3cb" 00:09:38.092 ], 00:09:38.092 "product_name": "Raid Volume", 00:09:38.092 "block_size": 512, 00:09:38.092 "num_blocks": 63488, 00:09:38.092 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:38.092 "assigned_rate_limits": { 00:09:38.092 "rw_ios_per_sec": 0, 00:09:38.092 "rw_mbytes_per_sec": 0, 00:09:38.092 "r_mbytes_per_sec": 0, 00:09:38.092 "w_mbytes_per_sec": 0 00:09:38.092 }, 00:09:38.092 "claimed": false, 00:09:38.092 "zoned": false, 00:09:38.092 "supported_io_types": { 00:09:38.092 "read": true, 00:09:38.092 "write": true, 00:09:38.092 "unmap": false, 00:09:38.092 "flush": false, 00:09:38.092 "reset": true, 00:09:38.092 "nvme_admin": false, 00:09:38.092 "nvme_io": false, 00:09:38.092 "nvme_io_md": false, 00:09:38.092 "write_zeroes": true, 00:09:38.092 "zcopy": false, 00:09:38.092 "get_zone_info": false, 00:09:38.092 "zone_management": false, 00:09:38.092 "zone_append": false, 00:09:38.092 "compare": false, 00:09:38.092 "compare_and_write": false, 00:09:38.092 "abort": false, 00:09:38.092 "seek_hole": false, 00:09:38.092 "seek_data": false, 00:09:38.092 "copy": false, 00:09:38.092 "nvme_iov_md": false 00:09:38.092 }, 00:09:38.092 "memory_domains": [ 00:09:38.092 { 00:09:38.092 "dma_device_id": "system", 00:09:38.092 "dma_device_type": 1 00:09:38.092 }, 00:09:38.092 { 00:09:38.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.092 "dma_device_type": 2 00:09:38.092 }, 00:09:38.092 { 00:09:38.092 "dma_device_id": "system", 00:09:38.092 "dma_device_type": 1 00:09:38.092 }, 00:09:38.092 { 00:09:38.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.092 "dma_device_type": 2 00:09:38.092 }, 00:09:38.092 { 00:09:38.092 "dma_device_id": "system", 00:09:38.092 "dma_device_type": 1 00:09:38.092 }, 00:09:38.092 { 00:09:38.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.092 "dma_device_type": 2 00:09:38.092 } 00:09:38.092 ], 00:09:38.092 "driver_specific": { 
00:09:38.092 "raid": { 00:09:38.092 "uuid": "165c7370-4c6e-4771-b72b-2f0002c0e3cb", 00:09:38.092 "strip_size_kb": 0, 00:09:38.092 "state": "online", 00:09:38.092 "raid_level": "raid1", 00:09:38.092 "superblock": true, 00:09:38.092 "num_base_bdevs": 3, 00:09:38.092 "num_base_bdevs_discovered": 3, 00:09:38.092 "num_base_bdevs_operational": 3, 00:09:38.092 "base_bdevs_list": [ 00:09:38.092 { 00:09:38.092 "name": "NewBaseBdev", 00:09:38.092 "uuid": "ab9ca4d7-503d-463a-a7ff-f533bd488b39", 00:09:38.092 "is_configured": true, 00:09:38.092 "data_offset": 2048, 00:09:38.092 "data_size": 63488 00:09:38.092 }, 00:09:38.092 { 00:09:38.092 "name": "BaseBdev2", 00:09:38.092 "uuid": "cb3c6109-9b22-46fd-8297-216e856fa65c", 00:09:38.092 "is_configured": true, 00:09:38.092 "data_offset": 2048, 00:09:38.092 "data_size": 63488 00:09:38.092 }, 00:09:38.092 { 00:09:38.092 "name": "BaseBdev3", 00:09:38.092 "uuid": "4edca3af-a90a-4644-bcc7-14236fa7e640", 00:09:38.092 "is_configured": true, 00:09:38.092 "data_offset": 2048, 00:09:38.092 "data_size": 63488 00:09:38.092 } 00:09:38.092 ] 00:09:38.092 } 00:09:38.092 } 00:09:38.092 }' 00:09:38.092 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.092 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:38.092 BaseBdev2 00:09:38.092 BaseBdev3' 00:09:38.092 02:42:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:38.092 
02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.092 02:42:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.092 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.352 [2024-12-07 02:42:49.171040] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.352 [2024-12-07 02:42:49.171112] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.352 [2024-12-07 02:42:49.171207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.352 [2024-12-07 02:42:49.171524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.352 [2024-12-07 02:42:49.171592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 79282 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 79282 ']' 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 79282 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79282 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79282' 00:09:38.352 killing process with pid 79282 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 79282 00:09:38.352 [2024-12-07 02:42:49.214781] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.352 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 79282 00:09:38.352 [2024-12-07 02:42:49.273469] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.612 02:42:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.612 00:09:38.612 real 0m8.796s 00:09:38.612 user 0m14.664s 00:09:38.612 sys 0m1.904s 00:09:38.612 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.612 ************************************ 00:09:38.612 END TEST raid_state_function_test_sb 00:09:38.612 ************************************ 00:09:38.612 02:42:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.872 02:42:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:38.872 02:42:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:38.872 02:42:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.872 02:42:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.872 ************************************ 00:09:38.872 START TEST raid_superblock_test 00:09:38.872 ************************************ 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79886 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79886 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79886 ']' 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.872 02:42:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.872 [2024-12-07 02:42:49.810313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:38.872 [2024-12-07 02:42:49.810434] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79886 ] 00:09:39.132 [2024-12-07 02:42:49.975305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.132 [2024-12-07 02:42:50.045332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.132 [2024-12-07 02:42:50.121580] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.132 [2024-12-07 02:42:50.121645] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:39.701 
02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.701 malloc1 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.701 [2024-12-07 02:42:50.655586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.701 [2024-12-07 02:42:50.655672] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.701 [2024-12-07 02:42:50.655694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:39.701 [2024-12-07 02:42:50.655717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.701 [2024-12-07 02:42:50.658131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.701 [2024-12-07 02:42:50.658167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.701 pt1 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.701 malloc2 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.701 [2024-12-07 02:42:50.707682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.701 [2024-12-07 02:42:50.707883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.701 [2024-12-07 02:42:50.707955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:39.701 [2024-12-07 02:42:50.708035] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.701 [2024-12-07 02:42:50.712659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.701 [2024-12-07 02:42:50.712789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.701 
pt2 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.701 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.702 malloc3 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.702 [2024-12-07 02:42:50.747793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:39.702 [2024-12-07 02:42:50.747890] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.702 [2024-12-07 02:42:50.747925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.702 [2024-12-07 02:42:50.747953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.702 [2024-12-07 02:42:50.750358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.702 [2024-12-07 02:42:50.750428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:39.702 pt3 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.702 [2024-12-07 02:42:50.759836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.702 [2024-12-07 02:42:50.762023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.702 [2024-12-07 02:42:50.762136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:39.702 [2024-12-07 02:42:50.762313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:09:39.702 [2024-12-07 02:42:50.762358] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.702 [2024-12-07 02:42:50.762671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:39.702 
[2024-12-07 02:42:50.762851] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:09:39.702 [2024-12-07 02:42:50.762900] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:09:39.702 [2024-12-07 02:42:50.763081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.702 02:42:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.962 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.962 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.962 "name": "raid_bdev1", 00:09:39.962 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:39.962 "strip_size_kb": 0, 00:09:39.962 "state": "online", 00:09:39.962 "raid_level": "raid1", 00:09:39.962 "superblock": true, 00:09:39.962 "num_base_bdevs": 3, 00:09:39.962 "num_base_bdevs_discovered": 3, 00:09:39.962 "num_base_bdevs_operational": 3, 00:09:39.962 "base_bdevs_list": [ 00:09:39.962 { 00:09:39.962 "name": "pt1", 00:09:39.962 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.962 "is_configured": true, 00:09:39.962 "data_offset": 2048, 00:09:39.962 "data_size": 63488 00:09:39.962 }, 00:09:39.962 { 00:09:39.962 "name": "pt2", 00:09:39.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.962 "is_configured": true, 00:09:39.962 "data_offset": 2048, 00:09:39.962 "data_size": 63488 00:09:39.962 }, 00:09:39.962 { 00:09:39.962 "name": "pt3", 00:09:39.962 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.962 "is_configured": true, 00:09:39.962 "data_offset": 2048, 00:09:39.962 "data_size": 63488 00:09:39.962 } 00:09:39.962 ] 00:09:39.962 }' 00:09:39.962 02:42:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.962 02:42:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.222 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.222 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:40.222 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.222 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.222 02:42:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.222 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.222 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.223 [2024-12-07 02:42:51.203459] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.223 "name": "raid_bdev1", 00:09:40.223 "aliases": [ 00:09:40.223 "9c2b561d-f4dd-4568-bcba-4dc79263104a" 00:09:40.223 ], 00:09:40.223 "product_name": "Raid Volume", 00:09:40.223 "block_size": 512, 00:09:40.223 "num_blocks": 63488, 00:09:40.223 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:40.223 "assigned_rate_limits": { 00:09:40.223 "rw_ios_per_sec": 0, 00:09:40.223 "rw_mbytes_per_sec": 0, 00:09:40.223 "r_mbytes_per_sec": 0, 00:09:40.223 "w_mbytes_per_sec": 0 00:09:40.223 }, 00:09:40.223 "claimed": false, 00:09:40.223 "zoned": false, 00:09:40.223 "supported_io_types": { 00:09:40.223 "read": true, 00:09:40.223 "write": true, 00:09:40.223 "unmap": false, 00:09:40.223 "flush": false, 00:09:40.223 "reset": true, 00:09:40.223 "nvme_admin": false, 00:09:40.223 "nvme_io": false, 00:09:40.223 "nvme_io_md": false, 00:09:40.223 "write_zeroes": true, 00:09:40.223 "zcopy": false, 00:09:40.223 "get_zone_info": false, 00:09:40.223 "zone_management": false, 00:09:40.223 "zone_append": false, 00:09:40.223 "compare": false, 00:09:40.223 
"compare_and_write": false, 00:09:40.223 "abort": false, 00:09:40.223 "seek_hole": false, 00:09:40.223 "seek_data": false, 00:09:40.223 "copy": false, 00:09:40.223 "nvme_iov_md": false 00:09:40.223 }, 00:09:40.223 "memory_domains": [ 00:09:40.223 { 00:09:40.223 "dma_device_id": "system", 00:09:40.223 "dma_device_type": 1 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.223 "dma_device_type": 2 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "dma_device_id": "system", 00:09:40.223 "dma_device_type": 1 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.223 "dma_device_type": 2 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "dma_device_id": "system", 00:09:40.223 "dma_device_type": 1 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.223 "dma_device_type": 2 00:09:40.223 } 00:09:40.223 ], 00:09:40.223 "driver_specific": { 00:09:40.223 "raid": { 00:09:40.223 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:40.223 "strip_size_kb": 0, 00:09:40.223 "state": "online", 00:09:40.223 "raid_level": "raid1", 00:09:40.223 "superblock": true, 00:09:40.223 "num_base_bdevs": 3, 00:09:40.223 "num_base_bdevs_discovered": 3, 00:09:40.223 "num_base_bdevs_operational": 3, 00:09:40.223 "base_bdevs_list": [ 00:09:40.223 { 00:09:40.223 "name": "pt1", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.223 "is_configured": true, 00:09:40.223 "data_offset": 2048, 00:09:40.223 "data_size": 63488 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "name": "pt2", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.223 "is_configured": true, 00:09:40.223 "data_offset": 2048, 00:09:40.223 "data_size": 63488 00:09:40.223 }, 00:09:40.223 { 00:09:40.223 "name": "pt3", 00:09:40.223 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.223 "is_configured": true, 00:09:40.223 "data_offset": 2048, 00:09:40.223 "data_size": 63488 00:09:40.223 } 
00:09:40.223 ] 00:09:40.223 } 00:09:40.223 } 00:09:40.223 }' 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.223 pt2 00:09:40.223 pt3' 00:09:40.223 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.484 [2024-12-07 02:42:51.482933] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c2b561d-f4dd-4568-bcba-4dc79263104a 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9c2b561d-f4dd-4568-bcba-4dc79263104a ']' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.484 [2024-12-07 02:42:51.526614] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.484 [2024-12-07 02:42:51.526639] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.484 [2024-12-07 02:42:51.526732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.484 [2024-12-07 02:42:51.526812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.484 [2024-12-07 02:42:51.526831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.484 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:40.745 02:42:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.745 [2024-12-07 02:42:51.674357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.745 [2024-12-07 02:42:51.676552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:40.745 [2024-12-07 02:42:51.676611] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:40.745 [2024-12-07 02:42:51.676667] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:40.745 [2024-12-07 02:42:51.676709] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:40.745 [2024-12-07 02:42:51.676728] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:40.745 [2024-12-07 02:42:51.676740] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.745 [2024-12-07 02:42:51.676759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:09:40.745 request: 00:09:40.745 { 00:09:40.745 "name": "raid_bdev1", 00:09:40.745 "raid_level": "raid1", 00:09:40.745 "base_bdevs": [ 00:09:40.745 "malloc1", 00:09:40.745 "malloc2", 00:09:40.745 "malloc3" 00:09:40.745 ], 00:09:40.745 "superblock": false, 00:09:40.745 "method": "bdev_raid_create", 00:09:40.745 "req_id": 1 00:09:40.745 } 00:09:40.745 Got JSON-RPC error response 00:09:40.745 response: 00:09:40.745 { 00:09:40.745 "code": -17, 00:09:40.745 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:40.745 } 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.745 [2024-12-07 02:42:51.738213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.745 [2024-12-07 02:42:51.738266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.745 [2024-12-07 02:42:51.738283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:40.745 [2024-12-07 02:42:51.738294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.745 [2024-12-07 02:42:51.740726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.745 [2024-12-07 02:42:51.740788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.745 [2024-12-07 02:42:51.740853] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:40.745 [2024-12-07 02:42:51.740896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:40.745 pt1 00:09:40.745 
02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.745 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.746 "name": "raid_bdev1", 00:09:40.746 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:40.746 "strip_size_kb": 0, 00:09:40.746 
"state": "configuring", 00:09:40.746 "raid_level": "raid1", 00:09:40.746 "superblock": true, 00:09:40.746 "num_base_bdevs": 3, 00:09:40.746 "num_base_bdevs_discovered": 1, 00:09:40.746 "num_base_bdevs_operational": 3, 00:09:40.746 "base_bdevs_list": [ 00:09:40.746 { 00:09:40.746 "name": "pt1", 00:09:40.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.746 "is_configured": true, 00:09:40.746 "data_offset": 2048, 00:09:40.746 "data_size": 63488 00:09:40.746 }, 00:09:40.746 { 00:09:40.746 "name": null, 00:09:40.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.746 "is_configured": false, 00:09:40.746 "data_offset": 2048, 00:09:40.746 "data_size": 63488 00:09:40.746 }, 00:09:40.746 { 00:09:40.746 "name": null, 00:09:40.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.746 "is_configured": false, 00:09:40.746 "data_offset": 2048, 00:09:40.746 "data_size": 63488 00:09:40.746 } 00:09:40.746 ] 00:09:40.746 }' 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.746 02:42:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.316 [2024-12-07 02:42:52.173630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.316 [2024-12-07 02:42:52.173717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.316 [2024-12-07 02:42:52.173741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:41.316 
[2024-12-07 02:42:52.173757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.316 [2024-12-07 02:42:52.174237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.316 [2024-12-07 02:42:52.174266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.316 [2024-12-07 02:42:52.174355] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.316 [2024-12-07 02:42:52.174388] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.316 pt2 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.316 [2024-12-07 02:42:52.185612] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.316 "name": "raid_bdev1", 00:09:41.316 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:41.316 "strip_size_kb": 0, 00:09:41.316 "state": "configuring", 00:09:41.316 "raid_level": "raid1", 00:09:41.316 "superblock": true, 00:09:41.316 "num_base_bdevs": 3, 00:09:41.316 "num_base_bdevs_discovered": 1, 00:09:41.316 "num_base_bdevs_operational": 3, 00:09:41.316 "base_bdevs_list": [ 00:09:41.316 { 00:09:41.316 "name": "pt1", 00:09:41.316 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.316 "is_configured": true, 00:09:41.316 "data_offset": 2048, 00:09:41.316 "data_size": 63488 00:09:41.316 }, 00:09:41.316 { 00:09:41.316 "name": null, 00:09:41.316 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.316 "is_configured": false, 00:09:41.316 "data_offset": 0, 00:09:41.316 "data_size": 63488 00:09:41.316 }, 00:09:41.316 { 00:09:41.316 "name": null, 00:09:41.316 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.316 "is_configured": false, 00:09:41.316 
"data_offset": 2048, 00:09:41.316 "data_size": 63488 00:09:41.316 } 00:09:41.316 ] 00:09:41.316 }' 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.316 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.577 [2024-12-07 02:42:52.588863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.577 [2024-12-07 02:42:52.588923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.577 [2024-12-07 02:42:52.588942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:41.577 [2024-12-07 02:42:52.588951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.577 [2024-12-07 02:42:52.589374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.577 [2024-12-07 02:42:52.589400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.577 [2024-12-07 02:42:52.589476] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.577 [2024-12-07 02:42:52.589507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.577 pt2 00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.577 02:42:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:41.577 [2024-12-07 02:42:52.600846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:41.577 [2024-12-07 02:42:52.600887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:41.577 [2024-12-07 02:42:52.600905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:09:41.577 [2024-12-07 02:42:52.600914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:41.577 [2024-12-07 02:42:52.601250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:41.577 [2024-12-07 02:42:52.601276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:41.577 [2024-12-07 02:42:52.601337] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:41.577 [2024-12-07 02:42:52.601353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:41.577 [2024-12-07 02:42:52.601446] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980
00:09:41.577 [2024-12-07 02:42:52.601459] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:41.577 [2024-12-07 02:42:52.601721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:41.577 [2024-12-07 02:42:52.601852] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid
bdev generic 0x617000006980
00:09:41.577 [2024-12-07 02:42:52.601871] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980
00:09:41.577 [2024-12-07 02:42:52.601977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:41.577 pt3
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test
-- common/autotest_common.sh@10 -- # set +x
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:41.577 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:41.838 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:41.838 "name": "raid_bdev1",
00:09:41.838 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a",
00:09:41.838 "strip_size_kb": 0,
00:09:41.838 "state": "online",
00:09:41.838 "raid_level": "raid1",
00:09:41.838 "superblock": true,
00:09:41.838 "num_base_bdevs": 3,
00:09:41.838 "num_base_bdevs_discovered": 3,
00:09:41.838 "num_base_bdevs_operational": 3,
00:09:41.838 "base_bdevs_list": [
00:09:41.838 {
00:09:41.838 "name": "pt1",
00:09:41.838 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:41.838 "is_configured": true,
00:09:41.838 "data_offset": 2048,
00:09:41.838 "data_size": 63488
00:09:41.838 },
00:09:41.838 {
00:09:41.838 "name": "pt2",
00:09:41.838 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:41.838 "is_configured": true,
00:09:41.838 "data_offset": 2048,
00:09:41.838 "data_size": 63488
00:09:41.838 },
00:09:41.838 {
00:09:41.838 "name": "pt3",
00:09:41.838 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:41.838 "is_configured": true,
00:09:41.838 "data_offset": 2048,
00:09:41.838 "data_size": 63488
00:09:41.838 }
00:09:41.838 ]
00:09:41.838 }'
00:09:41.838 02:42:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:41.838 02:42:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.098 [2024-12-07 02:42:53.040473] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:42.098 "name": "raid_bdev1",
00:09:42.098 "aliases": [
00:09:42.098 "9c2b561d-f4dd-4568-bcba-4dc79263104a"
00:09:42.098 ],
00:09:42.098 "product_name": "Raid Volume",
00:09:42.098 "block_size": 512,
00:09:42.098 "num_blocks": 63488,
00:09:42.098 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a",
00:09:42.098 "assigned_rate_limits": {
00:09:42.098 "rw_ios_per_sec": 0,
00:09:42.098 "rw_mbytes_per_sec": 0,
00:09:42.098 "r_mbytes_per_sec": 0,
00:09:42.098 "w_mbytes_per_sec": 0
00:09:42.098 },
00:09:42.098 "claimed": false,
00:09:42.098 "zoned": false,
00:09:42.098 "supported_io_types": {
00:09:42.098 "read": true,
00:09:42.098 "write": true,
00:09:42.098 "unmap": false,
00:09:42.098 "flush": false,
00:09:42.098 "reset": true,
00:09:42.098 "nvme_admin": false,
00:09:42.098 "nvme_io": false,
00:09:42.098 "nvme_io_md": false,
00:09:42.098 "write_zeroes": true,
00:09:42.098 "zcopy": false,
00:09:42.098 "get_zone_info": false,
00:09:42.098 "zone_management": false, 00:09:42.098 "zone_append": false, 00:09:42.098 "compare": false, 00:09:42.098 "compare_and_write": false, 00:09:42.098 "abort": false, 00:09:42.098 "seek_hole": false, 00:09:42.098 "seek_data": false, 00:09:42.098 "copy": false, 00:09:42.098 "nvme_iov_md": false 00:09:42.098 }, 00:09:42.098 "memory_domains": [ 00:09:42.098 { 00:09:42.098 "dma_device_id": "system", 00:09:42.098 "dma_device_type": 1 00:09:42.098 }, 00:09:42.098 { 00:09:42.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.098 "dma_device_type": 2 00:09:42.098 }, 00:09:42.098 { 00:09:42.098 "dma_device_id": "system", 00:09:42.098 "dma_device_type": 1 00:09:42.098 }, 00:09:42.098 { 00:09:42.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.098 "dma_device_type": 2 00:09:42.098 }, 00:09:42.098 { 00:09:42.098 "dma_device_id": "system", 00:09:42.098 "dma_device_type": 1 00:09:42.098 }, 00:09:42.098 { 00:09:42.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.098 "dma_device_type": 2 00:09:42.098 } 00:09:42.098 ], 00:09:42.098 "driver_specific": { 00:09:42.098 "raid": { 00:09:42.098 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:42.098 "strip_size_kb": 0, 00:09:42.098 "state": "online", 00:09:42.098 "raid_level": "raid1", 00:09:42.098 "superblock": true, 00:09:42.098 "num_base_bdevs": 3, 00:09:42.098 "num_base_bdevs_discovered": 3, 00:09:42.098 "num_base_bdevs_operational": 3, 00:09:42.098 "base_bdevs_list": [ 00:09:42.098 { 00:09:42.098 "name": "pt1", 00:09:42.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.098 "is_configured": true, 00:09:42.098 "data_offset": 2048, 00:09:42.098 "data_size": 63488 00:09:42.098 }, 00:09:42.098 { 00:09:42.098 "name": "pt2", 00:09:42.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.098 "is_configured": true, 00:09:42.098 "data_offset": 2048, 00:09:42.098 "data_size": 63488 00:09:42.098 }, 00:09:42.098 { 00:09:42.098 "name": "pt3", 00:09:42.098 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:42.098 "is_configured": true, 00:09:42.098 "data_offset": 2048, 00:09:42.098 "data_size": 63488 00:09:42.098 } 00:09:42.098 ] 00:09:42.098 } 00:09:42.098 } 00:09:42.098 }' 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:42.098 pt2 00:09:42.098 pt3' 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.098 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.359 [2024-12-07 02:42:53.319932]
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9c2b561d-f4dd-4568-bcba-4dc79263104a '!=' 9c2b561d-f4dd-4568-bcba-4dc79263104a ']'
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.359 [2024-12-07 02:42:53.343724] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:42.359 02:42:53
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.359 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:42.360 "name": "raid_bdev1",
00:09:42.360 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a",
00:09:42.360 "strip_size_kb": 0,
00:09:42.360 "state": "online",
00:09:42.360 "raid_level": "raid1",
00:09:42.360 "superblock": true,
00:09:42.360 "num_base_bdevs": 3,
00:09:42.360 "num_base_bdevs_discovered": 2,
00:09:42.360 "num_base_bdevs_operational": 2,
00:09:42.360 "base_bdevs_list": [
00:09:42.360 {
00:09:42.360 "name": null,
00:09:42.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:42.360 "is_configured": false,
00:09:42.360 "data_offset": 0,
00:09:42.360 "data_size": 63488
00:09:42.360 },
00:09:42.360 {
00:09:42.360 "name": "pt2",
00:09:42.360 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:42.360 "is_configured": true,
00:09:42.360 "data_offset": 2048,
00:09:42.360 "data_size": 63488
00:09:42.360 },
00:09:42.360 {
00:09:42.360 "name": "pt3",
00:09:42.360 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:42.360 "is_configured": true,
00:09:42.360 "data_offset": 2048,
00:09:42.360 "data_size": 63488
00:09:42.360 ]
00:09:42.360 }'
00:09:42.360 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:42.360 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.931 [2024-12-07 02:42:53.775028] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:42.931 [2024-12-07 02:42:53.775072] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:42.931 [2024-12-07 02:42:53.775170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:42.931 [2024-12-07 02:42:53.775243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:42.931 [2024-12-07 02:42:53.775265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.931 02:42:53
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.931 [2024-12-07 02:42:53.858837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:42.931 [2024-12-07 02:42:53.858896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:42.931 [2024-12-07 02:42:53.858914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:09:42.931 [2024-12-07 02:42:53.858924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:42.931 [2024-12-07 02:42:53.861480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:42.931 [2024-12-07 02:42:53.861515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:42.931 [2024-12-07 02:42:53.861604] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:42.931 [2024-12-07 02:42:53.861643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:42.931 pt2
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:42.931 02:42:53
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:42.931 "name": "raid_bdev1",
00:09:42.931 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a",
00:09:42.931 "strip_size_kb": 0,
00:09:42.931 "state": "configuring",
00:09:42.931 "raid_level": "raid1",
00:09:42.931 "superblock": true,
00:09:42.931 "num_base_bdevs": 3,
00:09:42.931 "num_base_bdevs_discovered": 1,
00:09:42.931 "num_base_bdevs_operational": 2,
00:09:42.931 "base_bdevs_list": [
00:09:42.931 {
00:09:42.931 "name": null,
00:09:42.931 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:42.931 "is_configured": false,
00:09:42.931 "data_offset": 2048,
00:09:42.931 "data_size": 63488
00:09:42.931 },
00:09:42.931 {
00:09:42.931 "name": "pt2",
00:09:42.931 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:42.931 "is_configured": true,
00:09:42.931 "data_offset": 2048,
00:09:42.931 "data_size": 63488
00:09:42.931 },
00:09:42.931 {
00:09:42.931 "name": null,
00:09:42.931 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:42.931 "is_configured": false,
00:09:42.931 "data_offset": 2048,
00:09:42.931 "data_size": 63488
00:09:42.931 ]
00:09:42.931 }'
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:42.931 02:42:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.501 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ ))
00:09:43.501 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:09:43.501 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2
00:09:43.501 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:43.501 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.501 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.501 [2024-12-07 02:42:54.318089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:43.501 [2024-12-07 02:42:54.318150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:43.501 [2024-12-07 02:42:54.318175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:09:43.501 [2024-12-07 02:42:54.318184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:43.501 [2024-12-07 02:42:54.318639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:43.501 [2024-12-07 02:42:54.318664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:43.502 [2024-12-07 02:42:54.318746] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:43.502 [2024-12-07 02:42:54.318774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:43.502 [2024-12-07 02:42:54.318870] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:09:43.502 [2024-12-07 02:42:54.318882] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:43.502 [2024-12-07 02:42:54.319147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:43.502 [2024-12-07 02:42:54.319284] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:09:43.502 [2024-12-07 02:42:54.319299] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00
00:09:43.502 [2024-12-07 02:42:54.319409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:43.502 pt3
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:43.502 "name": "raid_bdev1",
00:09:43.502 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a",
00:09:43.502 "strip_size_kb": 0,
00:09:43.502 "state": "online",
00:09:43.502 "raid_level": "raid1",
00:09:43.502 "superblock": true,
00:09:43.502 "num_base_bdevs": 3,
00:09:43.502 "num_base_bdevs_discovered": 2,
00:09:43.502 "num_base_bdevs_operational": 2,
00:09:43.502 "base_bdevs_list": [
00:09:43.502 {
00:09:43.502 "name": null,
00:09:43.502 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:43.502 "is_configured": false,
00:09:43.502 "data_offset": 2048,
00:09:43.502 "data_size": 63488
00:09:43.502 },
00:09:43.502 {
00:09:43.502 "name": "pt2",
00:09:43.502 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:43.502 "is_configured": true,
00:09:43.502 "data_offset": 2048,
00:09:43.502 "data_size": 63488
00:09:43.502 },
00:09:43.502 {
00:09:43.502 "name": "pt3",
00:09:43.502 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:43.502 "is_configured": true,
00:09:43.502 "data_offset": 2048,
00:09:43.502 "data_size": 63488
00:09:43.502 }
00:09:43.502 ]
00:09:43.502 }'
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:43.502 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:43.762 02:42:54 bdev_raid.raid_superblock_test --
common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.762 [2024-12-07 02:42:54.709385] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.762 [2024-12-07 02:42:54.709416] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.762 [2024-12-07 02:42:54.709485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.762 [2024-12-07 02:42:54.709542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.762 [2024-12-07 02:42:54.709558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.762 [2024-12-07 02:42:54.781242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.762 [2024-12-07 02:42:54.781316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.762 [2024-12-07 02:42:54.781332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:43.762 [2024-12-07 02:42:54.781344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.762 [2024-12-07 02:42:54.783821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.762 [2024-12-07 02:42:54.783856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.762 [2024-12-07 02:42:54.783923] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:43.762 [2024-12-07 02:42:54.783964] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:43.762 [2024-12-07 02:42:54.784066] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:43.762 [2024-12-07 02:42:54.784082] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.762 [2024-12-07 02:42:54.784101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007080 name raid_bdev1, state configuring 00:09:43.762 [2024-12-07 02:42:54.784137] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:43.762 pt1 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.762 02:42:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.022 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.022 "name": "raid_bdev1", 00:09:44.022 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:44.022 "strip_size_kb": 0, 00:09:44.022 "state": "configuring", 00:09:44.022 "raid_level": "raid1", 00:09:44.022 "superblock": true, 00:09:44.022 "num_base_bdevs": 3, 00:09:44.022 "num_base_bdevs_discovered": 1, 00:09:44.022 "num_base_bdevs_operational": 2, 00:09:44.022 "base_bdevs_list": [ 00:09:44.022 { 00:09:44.022 "name": null, 00:09:44.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.022 "is_configured": false, 00:09:44.022 "data_offset": 2048, 00:09:44.022 "data_size": 63488 00:09:44.022 }, 00:09:44.022 { 00:09:44.022 "name": "pt2", 00:09:44.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.022 "is_configured": true, 00:09:44.022 "data_offset": 2048, 00:09:44.022 "data_size": 63488 00:09:44.022 }, 00:09:44.022 { 00:09:44.022 "name": null, 00:09:44.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.022 "is_configured": false, 00:09:44.022 "data_offset": 2048, 00:09:44.022 "data_size": 63488 00:09:44.022 } 00:09:44.022 ] 00:09:44.022 }' 00:09:44.022 02:42:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.022 02:42:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.282 [2024-12-07 02:42:55.280423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.282 [2024-12-07 02:42:55.280492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.282 [2024-12-07 02:42:55.280511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:44.282 [2024-12-07 02:42:55.280522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.282 [2024-12-07 02:42:55.280983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.282 [2024-12-07 02:42:55.281016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.282 [2024-12-07 02:42:55.281099] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:44.282 [2024-12-07 02:42:55.281153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.282 [2024-12-07 02:42:55.281259] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:44.282 [2024-12-07 02:42:55.281275] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.282 [2024-12-07 02:42:55.281514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:44.282 [2024-12-07 02:42:55.281673] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:44.282 [2024-12-07 02:42:55.281690] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:44.282 [2024-12-07 02:42:55.281806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.282 pt3 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.282 "name": "raid_bdev1", 00:09:44.282 "uuid": "9c2b561d-f4dd-4568-bcba-4dc79263104a", 00:09:44.282 "strip_size_kb": 0, 00:09:44.282 "state": "online", 00:09:44.282 "raid_level": "raid1", 00:09:44.282 "superblock": true, 00:09:44.282 "num_base_bdevs": 3, 00:09:44.282 "num_base_bdevs_discovered": 2, 00:09:44.282 "num_base_bdevs_operational": 2, 00:09:44.282 "base_bdevs_list": [ 00:09:44.282 { 00:09:44.282 "name": null, 00:09:44.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.282 "is_configured": false, 00:09:44.282 "data_offset": 2048, 00:09:44.282 "data_size": 63488 00:09:44.282 }, 00:09:44.282 { 00:09:44.282 "name": "pt2", 00:09:44.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.282 "is_configured": true, 00:09:44.282 "data_offset": 2048, 00:09:44.282 "data_size": 63488 00:09:44.282 }, 00:09:44.282 { 00:09:44.282 "name": "pt3", 00:09:44.282 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.282 "is_configured": true, 00:09:44.282 "data_offset": 2048, 00:09:44.282 "data_size": 63488 00:09:44.282 } 00:09:44.282 ] 00:09:44.282 }' 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.282 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.851 [2024-12-07 02:42:55.707975] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9c2b561d-f4dd-4568-bcba-4dc79263104a '!=' 9c2b561d-f4dd-4568-bcba-4dc79263104a ']' 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79886 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79886 ']' 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79886 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79886 00:09:44.851 killing process with pid 79886 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79886' 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79886 00:09:44.851 [2024-12-07 02:42:55.772615] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.851 [2024-12-07 02:42:55.772710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.851 [2024-12-07 02:42:55.772776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.851 [2024-12-07 02:42:55.772786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:44.851 02:42:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79886 00:09:44.851 [2024-12-07 02:42:55.834911] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.420 02:42:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:45.420 00:09:45.420 real 0m6.480s 00:09:45.420 user 0m10.512s 00:09:45.420 sys 0m1.449s 00:09:45.420 02:42:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.420 02:42:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.420 ************************************ 00:09:45.420 END TEST raid_superblock_test 00:09:45.420 ************************************ 00:09:45.420 02:42:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:45.420 02:42:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:45.420 02:42:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.420 02:42:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.420 ************************************ 00:09:45.421 START TEST raid_read_error_test 00:09:45.421 ************************************ 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:09:45.421 02:42:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:45.421 02:42:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pi6cfRJBKe 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80315 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80315 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 80315 ']' 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:45.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:45.421 02:42:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.421 [2024-12-07 02:42:56.368618] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:45.421 [2024-12-07 02:42:56.369120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80315 ] 00:09:45.680 [2024-12-07 02:42:56.529909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.680 [2024-12-07 02:42:56.599665] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.680 [2024-12-07 02:42:56.676122] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.680 [2024-12-07 02:42:56.676165] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 BaseBdev1_malloc 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 true 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 [2024-12-07 02:42:57.218466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:46.249 [2024-12-07 02:42:57.218534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.249 [2024-12-07 02:42:57.218578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:46.249 [2024-12-07 02:42:57.218587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.249 [2024-12-07 02:42:57.221045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.249 [2024-12-07 02:42:57.221081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:46.249 BaseBdev1 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 BaseBdev2_malloc 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 true 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.249 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.249 [2024-12-07 02:42:57.272246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:46.249 [2024-12-07 02:42:57.272314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.249 [2024-12-07 02:42:57.272332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:46.249 [2024-12-07 02:42:57.272341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.249 [2024-12-07 02:42:57.274709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.250 [2024-12-07 02:42:57.274743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:46.250 BaseBdev2 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.250 BaseBdev3_malloc 00:09:46.250 02:42:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.250 true 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.250 [2024-12-07 02:42:57.318842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:46.250 [2024-12-07 02:42:57.318904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.250 [2024-12-07 02:42:57.318923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:46.250 [2024-12-07 02:42:57.318932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.250 [2024-12-07 02:42:57.321298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.250 [2024-12-07 02:42:57.321333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:46.250 BaseBdev3 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.250 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.509 [2024-12-07 02:42:57.330898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.509 [2024-12-07 02:42:57.333047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.509 [2024-12-07 02:42:57.333146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.509 [2024-12-07 02:42:57.333323] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:46.509 [2024-12-07 02:42:57.333359] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:46.509 [2024-12-07 02:42:57.333608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:46.509 [2024-12-07 02:42:57.333769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:46.509 [2024-12-07 02:42:57.333787] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:46.509 [2024-12-07 02:42:57.333915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.509 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.509 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:46.509 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.509 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.509 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.509 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.510 02:42:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.510 "name": "raid_bdev1", 00:09:46.510 "uuid": "f49d43b8-d67b-4267-8a0b-46da9401a806", 00:09:46.510 "strip_size_kb": 0, 00:09:46.510 "state": "online", 00:09:46.510 "raid_level": "raid1", 00:09:46.510 "superblock": true, 00:09:46.510 "num_base_bdevs": 3, 00:09:46.510 "num_base_bdevs_discovered": 3, 00:09:46.510 "num_base_bdevs_operational": 3, 00:09:46.510 "base_bdevs_list": [ 00:09:46.510 { 00:09:46.510 "name": "BaseBdev1", 00:09:46.510 "uuid": "4f270fda-72eb-5960-a0a2-a5176c8315bf", 00:09:46.510 "is_configured": true, 00:09:46.510 "data_offset": 2048, 00:09:46.510 "data_size": 63488 00:09:46.510 }, 00:09:46.510 { 00:09:46.510 "name": "BaseBdev2", 00:09:46.510 "uuid": "1ac3c7a7-c3ff-54ec-ac33-0596098f9434", 00:09:46.510 "is_configured": true, 00:09:46.510 "data_offset": 2048, 00:09:46.510 "data_size": 63488 
00:09:46.510 }, 00:09:46.510 { 00:09:46.510 "name": "BaseBdev3", 00:09:46.510 "uuid": "5abcb0fe-fb18-56df-b362-94791aa1e3a8", 00:09:46.510 "is_configured": true, 00:09:46.510 "data_offset": 2048, 00:09:46.510 "data_size": 63488 00:09:46.510 } 00:09:46.510 ] 00:09:46.510 }' 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.510 02:42:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.769 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:46.769 02:42:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:47.028 [2024-12-07 02:42:57.854455] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.968 
02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.968 "name": "raid_bdev1", 00:09:47.968 "uuid": "f49d43b8-d67b-4267-8a0b-46da9401a806", 00:09:47.968 "strip_size_kb": 0, 00:09:47.968 "state": "online", 00:09:47.968 "raid_level": "raid1", 00:09:47.968 "superblock": true, 00:09:47.968 "num_base_bdevs": 3, 00:09:47.968 "num_base_bdevs_discovered": 3, 00:09:47.968 "num_base_bdevs_operational": 3, 00:09:47.968 "base_bdevs_list": [ 00:09:47.968 { 00:09:47.968 "name": "BaseBdev1", 00:09:47.968 "uuid": "4f270fda-72eb-5960-a0a2-a5176c8315bf", 
00:09:47.968 "is_configured": true, 00:09:47.968 "data_offset": 2048, 00:09:47.968 "data_size": 63488 00:09:47.968 }, 00:09:47.968 { 00:09:47.968 "name": "BaseBdev2", 00:09:47.968 "uuid": "1ac3c7a7-c3ff-54ec-ac33-0596098f9434", 00:09:47.968 "is_configured": true, 00:09:47.968 "data_offset": 2048, 00:09:47.968 "data_size": 63488 00:09:47.968 }, 00:09:47.968 { 00:09:47.968 "name": "BaseBdev3", 00:09:47.968 "uuid": "5abcb0fe-fb18-56df-b362-94791aa1e3a8", 00:09:47.968 "is_configured": true, 00:09:47.968 "data_offset": 2048, 00:09:47.968 "data_size": 63488 00:09:47.968 } 00:09:47.968 ] 00:09:47.968 }' 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.968 02:42:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.228 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.228 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.228 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.228 [2024-12-07 02:42:59.231707] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.229 [2024-12-07 02:42:59.231751] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.229 [2024-12-07 02:42:59.234257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.229 [2024-12-07 02:42:59.234315] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.229 [2024-12-07 02:42:59.234421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.229 [2024-12-07 02:42:59.234435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:48.229 { 00:09:48.229 "results": [ 00:09:48.229 { 00:09:48.229 "job": "raid_bdev1", 
00:09:48.229 "core_mask": "0x1", 00:09:48.229 "workload": "randrw", 00:09:48.229 "percentage": 50, 00:09:48.229 "status": "finished", 00:09:48.229 "queue_depth": 1, 00:09:48.229 "io_size": 131072, 00:09:48.229 "runtime": 1.377764, 00:09:48.229 "iops": 11157.93415998676, 00:09:48.229 "mibps": 1394.741769998345, 00:09:48.229 "io_failed": 0, 00:09:48.229 "io_timeout": 0, 00:09:48.229 "avg_latency_us": 87.14916306789793, 00:09:48.229 "min_latency_us": 21.463755458515283, 00:09:48.229 "max_latency_us": 1359.3711790393013 00:09:48.229 } 00:09:48.229 ], 00:09:48.229 "core_count": 1 00:09:48.229 } 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80315 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 80315 ']' 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 80315 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80315 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:48.229 killing process with pid 80315 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80315' 00:09:48.229 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 80315 00:09:48.229 [2024-12-07 02:42:59.278961] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.229 02:42:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 80315 00:09:48.489 [2024-12-07 02:42:59.328300] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pi6cfRJBKe 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:48.749 00:09:48.749 real 0m3.442s 00:09:48.749 user 0m4.172s 00:09:48.749 sys 0m0.652s 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:48.749 02:42:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.749 ************************************ 00:09:48.749 END TEST raid_read_error_test 00:09:48.749 ************************************ 00:09:48.749 02:42:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:48.749 02:42:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:48.749 02:42:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:48.749 02:42:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.749 ************************************ 00:09:48.749 START TEST raid_write_error_test 00:09:48.749 ************************************ 00:09:48.749 02:42:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ia1ejpN33T 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=80449 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 80449 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 80449 ']' 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.749 02:42:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.010 [2024-12-07 02:42:59.881599] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:09:49.010 [2024-12-07 02:42:59.881711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80449 ] 00:09:49.010 [2024-12-07 02:43:00.041303] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.268 [2024-12-07 02:43:00.112682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.268 [2024-12-07 02:43:00.188811] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.268 [2024-12-07 02:43:00.188857] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.838 BaseBdev1_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.838 true 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.838 [2024-12-07 02:43:00.746540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.838 [2024-12-07 02:43:00.746632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.838 [2024-12-07 02:43:00.746654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.838 [2024-12-07 02:43:00.746663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.838 [2024-12-07 02:43:00.748986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.838 [2024-12-07 02:43:00.749021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.838 BaseBdev1 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:49.838 BaseBdev2_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.838 true 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.838 [2024-12-07 02:43:00.811008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.838 [2024-12-07 02:43:00.811081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.838 [2024-12-07 02:43:00.811113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.838 [2024-12-07 02:43:00.811128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.838 [2024-12-07 02:43:00.814546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.838 [2024-12-07 02:43:00.814604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.838 BaseBdev2 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.838 02:43:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.838 BaseBdev3_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.838 true 00:09:49.838 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.839 [2024-12-07 02:43:00.857727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:49.839 [2024-12-07 02:43:00.857775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.839 [2024-12-07 02:43:00.857793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:49.839 [2024-12-07 02:43:00.857803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.839 [2024-12-07 02:43:00.860199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.839 [2024-12-07 02:43:00.860234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:49.839 BaseBdev3 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.839 [2024-12-07 02:43:00.869774] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.839 [2024-12-07 02:43:00.871906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.839 [2024-12-07 02:43:00.871991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.839 [2024-12-07 02:43:00.872177] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:09:49.839 [2024-12-07 02:43:00.872212] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.839 [2024-12-07 02:43:00.872451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:09:49.839 [2024-12-07 02:43:00.872645] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:09:49.839 [2024-12-07 02:43:00.872663] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:09:49.839 [2024-12-07 02:43:00.872796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.839 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.104 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.104 "name": "raid_bdev1", 00:09:50.104 "uuid": "a8e8634a-6cf7-4e07-8968-7905cab2a6a7", 00:09:50.104 "strip_size_kb": 0, 00:09:50.104 "state": "online", 00:09:50.104 "raid_level": "raid1", 00:09:50.104 "superblock": true, 00:09:50.104 "num_base_bdevs": 3, 00:09:50.104 "num_base_bdevs_discovered": 3, 00:09:50.104 "num_base_bdevs_operational": 3, 00:09:50.104 "base_bdevs_list": [ 00:09:50.104 { 00:09:50.104 "name": "BaseBdev1", 00:09:50.104 
"uuid": "4173535e-d107-5ff4-a688-12272fab174c", 00:09:50.104 "is_configured": true, 00:09:50.104 "data_offset": 2048, 00:09:50.104 "data_size": 63488 00:09:50.104 }, 00:09:50.104 { 00:09:50.104 "name": "BaseBdev2", 00:09:50.104 "uuid": "23c3186f-de19-5cf1-98b2-e391eddb77ba", 00:09:50.104 "is_configured": true, 00:09:50.104 "data_offset": 2048, 00:09:50.104 "data_size": 63488 00:09:50.104 }, 00:09:50.104 { 00:09:50.104 "name": "BaseBdev3", 00:09:50.104 "uuid": "985bb6a2-c526-54a5-9429-bcbfefeb9646", 00:09:50.104 "is_configured": true, 00:09:50.104 "data_offset": 2048, 00:09:50.104 "data_size": 63488 00:09:50.104 } 00:09:50.104 ] 00:09:50.104 }' 00:09:50.104 02:43:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.104 02:43:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.381 02:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.381 02:43:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.381 [2024-12-07 02:43:01.433252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.337 [2024-12-07 02:43:02.364064] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:51.337 [2024-12-07 02:43:02.364140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.337 [2024-12-07 02:43:02.364379] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 
00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.337 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.597 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.597 "name": "raid_bdev1", 00:09:51.597 "uuid": "a8e8634a-6cf7-4e07-8968-7905cab2a6a7", 00:09:51.597 "strip_size_kb": 0, 00:09:51.597 "state": "online", 00:09:51.597 "raid_level": "raid1", 00:09:51.597 "superblock": true, 00:09:51.597 "num_base_bdevs": 3, 00:09:51.597 "num_base_bdevs_discovered": 2, 00:09:51.597 "num_base_bdevs_operational": 2, 00:09:51.597 "base_bdevs_list": [ 00:09:51.597 { 00:09:51.597 "name": null, 00:09:51.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.597 "is_configured": false, 00:09:51.597 "data_offset": 0, 00:09:51.597 "data_size": 63488 00:09:51.597 }, 00:09:51.597 { 00:09:51.597 "name": "BaseBdev2", 00:09:51.597 "uuid": "23c3186f-de19-5cf1-98b2-e391eddb77ba", 00:09:51.597 "is_configured": true, 00:09:51.597 "data_offset": 2048, 00:09:51.597 "data_size": 63488 00:09:51.597 }, 00:09:51.597 { 00:09:51.597 "name": "BaseBdev3", 00:09:51.597 "uuid": "985bb6a2-c526-54a5-9429-bcbfefeb9646", 00:09:51.597 "is_configured": true, 00:09:51.597 "data_offset": 2048, 00:09:51.597 "data_size": 63488 00:09:51.597 } 00:09:51.597 ] 00:09:51.597 }' 00:09:51.597 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.597 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.857 [2024-12-07 02:43:02.803582] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.857 [2024-12-07 02:43:02.803642] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.857 [2024-12-07 02:43:02.806099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.857 [2024-12-07 02:43:02.806161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.857 [2024-12-07 02:43:02.806252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.857 [2024-12-07 02:43:02.806263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:09:51.857 { 00:09:51.857 "results": [ 00:09:51.857 { 00:09:51.857 "job": "raid_bdev1", 00:09:51.857 "core_mask": "0x1", 00:09:51.857 "workload": "randrw", 00:09:51.857 "percentage": 50, 00:09:51.857 "status": "finished", 00:09:51.857 "queue_depth": 1, 00:09:51.857 "io_size": 131072, 00:09:51.857 "runtime": 1.371083, 00:09:51.857 "iops": 12671.005329363721, 00:09:51.857 "mibps": 1583.8756661704651, 00:09:51.857 "io_failed": 0, 00:09:51.857 "io_timeout": 0, 00:09:51.857 "avg_latency_us": 76.45210685556592, 00:09:51.857 "min_latency_us": 21.799126637554586, 00:09:51.857 "max_latency_us": 1337.907423580786 00:09:51.857 } 00:09:51.857 ], 00:09:51.857 "core_count": 1 00:09:51.857 } 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 80449 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 80449 ']' 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 80449 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:51.857 02:43:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80449 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.857 killing process with pid 80449 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80449' 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 80449 00:09:51.857 [2024-12-07 02:43:02.833968] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.857 02:43:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 80449 00:09:51.857 [2024-12-07 02:43:02.880723] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ia1ejpN33T 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:52.426 00:09:52.426 real 0m3.478s 00:09:52.426 user 0m4.253s 00:09:52.426 sys 0m0.612s 00:09:52.426 02:43:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.426 02:43:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.426 ************************************ 00:09:52.426 END TEST raid_write_error_test 00:09:52.426 ************************************ 00:09:52.426 02:43:03 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:52.426 02:43:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:52.426 02:43:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:52.426 02:43:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:52.426 02:43:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.426 02:43:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.426 ************************************ 00:09:52.426 START TEST raid_state_function_test 00:09:52.426 ************************************ 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:52.426 
02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80583 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80583' 00:09:52.426 Process raid pid: 80583 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80583 00:09:52.426 02:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80583 ']' 00:09:52.427 02:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.427 02:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.427 02:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.427 02:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.427 02:43:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.427 [2024-12-07 02:43:03.422359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:09:52.427 [2024-12-07 02:43:03.422562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.686 [2024-12-07 02:43:03.581404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.686 [2024-12-07 02:43:03.650385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.686 [2024-12-07 02:43:03.727338] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.686 [2024-12-07 02:43:03.727492] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.253 [2024-12-07 02:43:04.258771] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.253 [2024-12-07 02:43:04.258841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.253 [2024-12-07 02:43:04.258870] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.253 [2024-12-07 02:43:04.258880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.253 [2024-12-07 02:43:04.258886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:53.253 [2024-12-07 02:43:04.258899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.253 [2024-12-07 02:43:04.258905] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.253 [2024-12-07 02:43:04.258913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.253 "name": "Existed_Raid", 00:09:53.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.253 "strip_size_kb": 64, 00:09:53.253 "state": "configuring", 00:09:53.253 "raid_level": "raid0", 00:09:53.253 "superblock": false, 00:09:53.253 "num_base_bdevs": 4, 00:09:53.253 "num_base_bdevs_discovered": 0, 00:09:53.253 "num_base_bdevs_operational": 4, 00:09:53.253 "base_bdevs_list": [ 00:09:53.253 { 00:09:53.253 "name": "BaseBdev1", 00:09:53.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.253 "is_configured": false, 00:09:53.253 "data_offset": 0, 00:09:53.253 "data_size": 0 00:09:53.253 }, 00:09:53.253 { 00:09:53.253 "name": "BaseBdev2", 00:09:53.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.253 "is_configured": false, 00:09:53.253 "data_offset": 0, 00:09:53.253 "data_size": 0 00:09:53.253 }, 00:09:53.253 { 00:09:53.253 "name": "BaseBdev3", 00:09:53.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.253 "is_configured": false, 00:09:53.253 "data_offset": 0, 00:09:53.253 "data_size": 0 00:09:53.253 }, 00:09:53.253 { 00:09:53.253 "name": "BaseBdev4", 00:09:53.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.253 "is_configured": false, 00:09:53.253 "data_offset": 0, 00:09:53.253 "data_size": 0 00:09:53.253 } 00:09:53.253 ] 00:09:53.253 }' 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.253 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.821 [2024-12-07 02:43:04.737824] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.821 [2024-12-07 02:43:04.737932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.821 [2024-12-07 02:43:04.749845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.821 [2024-12-07 02:43:04.749928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.821 [2024-12-07 02:43:04.749959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.821 [2024-12-07 02:43:04.749983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.821 [2024-12-07 02:43:04.750008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.821 [2024-12-07 02:43:04.750037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.821 [2024-12-07 02:43:04.750061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.821 [2024-12-07 02:43:04.750084] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.821 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.822 [2024-12-07 02:43:04.776963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.822 BaseBdev1 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.822 [ 00:09:53.822 { 00:09:53.822 "name": "BaseBdev1", 00:09:53.822 "aliases": [ 00:09:53.822 "19139010-71bf-4d9f-b0ec-e94bc9725b27" 00:09:53.822 ], 00:09:53.822 "product_name": "Malloc disk", 00:09:53.822 "block_size": 512, 00:09:53.822 "num_blocks": 65536, 00:09:53.822 "uuid": "19139010-71bf-4d9f-b0ec-e94bc9725b27", 00:09:53.822 "assigned_rate_limits": { 00:09:53.822 "rw_ios_per_sec": 0, 00:09:53.822 "rw_mbytes_per_sec": 0, 00:09:53.822 "r_mbytes_per_sec": 0, 00:09:53.822 "w_mbytes_per_sec": 0 00:09:53.822 }, 00:09:53.822 "claimed": true, 00:09:53.822 "claim_type": "exclusive_write", 00:09:53.822 "zoned": false, 00:09:53.822 "supported_io_types": { 00:09:53.822 "read": true, 00:09:53.822 "write": true, 00:09:53.822 "unmap": true, 00:09:53.822 "flush": true, 00:09:53.822 "reset": true, 00:09:53.822 "nvme_admin": false, 00:09:53.822 "nvme_io": false, 00:09:53.822 "nvme_io_md": false, 00:09:53.822 "write_zeroes": true, 00:09:53.822 "zcopy": true, 00:09:53.822 "get_zone_info": false, 00:09:53.822 "zone_management": false, 00:09:53.822 "zone_append": false, 00:09:53.822 "compare": false, 00:09:53.822 "compare_and_write": false, 00:09:53.822 "abort": true, 00:09:53.822 "seek_hole": false, 00:09:53.822 "seek_data": false, 00:09:53.822 "copy": true, 00:09:53.822 "nvme_iov_md": false 00:09:53.822 }, 00:09:53.822 "memory_domains": [ 00:09:53.822 { 00:09:53.822 "dma_device_id": "system", 00:09:53.822 "dma_device_type": 1 00:09:53.822 }, 00:09:53.822 { 00:09:53.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.822 "dma_device_type": 2 00:09:53.822 } 00:09:53.822 ], 00:09:53.822 "driver_specific": {} 00:09:53.822 } 00:09:53.822 ] 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.822 "name": "Existed_Raid", 
00:09:53.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.822 "strip_size_kb": 64, 00:09:53.822 "state": "configuring", 00:09:53.822 "raid_level": "raid0", 00:09:53.822 "superblock": false, 00:09:53.822 "num_base_bdevs": 4, 00:09:53.822 "num_base_bdevs_discovered": 1, 00:09:53.822 "num_base_bdevs_operational": 4, 00:09:53.822 "base_bdevs_list": [ 00:09:53.822 { 00:09:53.822 "name": "BaseBdev1", 00:09:53.822 "uuid": "19139010-71bf-4d9f-b0ec-e94bc9725b27", 00:09:53.822 "is_configured": true, 00:09:53.822 "data_offset": 0, 00:09:53.822 "data_size": 65536 00:09:53.822 }, 00:09:53.822 { 00:09:53.822 "name": "BaseBdev2", 00:09:53.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.822 "is_configured": false, 00:09:53.822 "data_offset": 0, 00:09:53.822 "data_size": 0 00:09:53.822 }, 00:09:53.822 { 00:09:53.822 "name": "BaseBdev3", 00:09:53.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.822 "is_configured": false, 00:09:53.822 "data_offset": 0, 00:09:53.822 "data_size": 0 00:09:53.822 }, 00:09:53.822 { 00:09:53.822 "name": "BaseBdev4", 00:09:53.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.822 "is_configured": false, 00:09:53.822 "data_offset": 0, 00:09:53.822 "data_size": 0 00:09:53.822 } 00:09:53.822 ] 00:09:53.822 }' 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.822 02:43:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.391 [2024-12-07 02:43:05.244184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.391 [2024-12-07 02:43:05.244293] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.391 [2024-12-07 02:43:05.256206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.391 [2024-12-07 02:43:05.258379] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.391 [2024-12-07 02:43:05.258452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.391 [2024-12-07 02:43:05.258480] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.391 [2024-12-07 02:43:05.258501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.391 [2024-12-07 02:43:05.258518] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:54.391 [2024-12-07 02:43:05.258537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.391 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.391 "name": "Existed_Raid", 00:09:54.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.391 "strip_size_kb": 64, 00:09:54.391 "state": "configuring", 00:09:54.391 "raid_level": "raid0", 00:09:54.391 "superblock": false, 00:09:54.391 "num_base_bdevs": 4, 00:09:54.391 
"num_base_bdevs_discovered": 1, 00:09:54.391 "num_base_bdevs_operational": 4, 00:09:54.391 "base_bdevs_list": [ 00:09:54.391 { 00:09:54.391 "name": "BaseBdev1", 00:09:54.392 "uuid": "19139010-71bf-4d9f-b0ec-e94bc9725b27", 00:09:54.392 "is_configured": true, 00:09:54.392 "data_offset": 0, 00:09:54.392 "data_size": 65536 00:09:54.392 }, 00:09:54.392 { 00:09:54.392 "name": "BaseBdev2", 00:09:54.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.392 "is_configured": false, 00:09:54.392 "data_offset": 0, 00:09:54.392 "data_size": 0 00:09:54.392 }, 00:09:54.392 { 00:09:54.392 "name": "BaseBdev3", 00:09:54.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.392 "is_configured": false, 00:09:54.392 "data_offset": 0, 00:09:54.392 "data_size": 0 00:09:54.392 }, 00:09:54.392 { 00:09:54.392 "name": "BaseBdev4", 00:09:54.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.392 "is_configured": false, 00:09:54.392 "data_offset": 0, 00:09:54.392 "data_size": 0 00:09:54.392 } 00:09:54.392 ] 00:09:54.392 }' 00:09:54.392 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.392 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.651 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.651 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.651 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.909 [2024-12-07 02:43:05.737667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.909 BaseBdev2 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:54.909 02:43:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.909 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.910 [ 00:09:54.910 { 00:09:54.910 "name": "BaseBdev2", 00:09:54.910 "aliases": [ 00:09:54.910 "c87ac0c4-0dee-4259-b228-e118690a5dcf" 00:09:54.910 ], 00:09:54.910 "product_name": "Malloc disk", 00:09:54.910 "block_size": 512, 00:09:54.910 "num_blocks": 65536, 00:09:54.910 "uuid": "c87ac0c4-0dee-4259-b228-e118690a5dcf", 00:09:54.910 "assigned_rate_limits": { 00:09:54.910 "rw_ios_per_sec": 0, 00:09:54.910 "rw_mbytes_per_sec": 0, 00:09:54.910 "r_mbytes_per_sec": 0, 00:09:54.910 "w_mbytes_per_sec": 0 00:09:54.910 }, 00:09:54.910 "claimed": true, 00:09:54.910 "claim_type": "exclusive_write", 00:09:54.910 "zoned": false, 00:09:54.910 "supported_io_types": { 
00:09:54.910 "read": true, 00:09:54.910 "write": true, 00:09:54.910 "unmap": true, 00:09:54.910 "flush": true, 00:09:54.910 "reset": true, 00:09:54.910 "nvme_admin": false, 00:09:54.910 "nvme_io": false, 00:09:54.910 "nvme_io_md": false, 00:09:54.910 "write_zeroes": true, 00:09:54.910 "zcopy": true, 00:09:54.910 "get_zone_info": false, 00:09:54.910 "zone_management": false, 00:09:54.910 "zone_append": false, 00:09:54.910 "compare": false, 00:09:54.910 "compare_and_write": false, 00:09:54.910 "abort": true, 00:09:54.910 "seek_hole": false, 00:09:54.910 "seek_data": false, 00:09:54.910 "copy": true, 00:09:54.910 "nvme_iov_md": false 00:09:54.910 }, 00:09:54.910 "memory_domains": [ 00:09:54.910 { 00:09:54.910 "dma_device_id": "system", 00:09:54.910 "dma_device_type": 1 00:09:54.910 }, 00:09:54.910 { 00:09:54.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.910 "dma_device_type": 2 00:09:54.910 } 00:09:54.910 ], 00:09:54.910 "driver_specific": {} 00:09:54.910 } 00:09:54.910 ] 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.910 "name": "Existed_Raid", 00:09:54.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.910 "strip_size_kb": 64, 00:09:54.910 "state": "configuring", 00:09:54.910 "raid_level": "raid0", 00:09:54.910 "superblock": false, 00:09:54.910 "num_base_bdevs": 4, 00:09:54.910 "num_base_bdevs_discovered": 2, 00:09:54.910 "num_base_bdevs_operational": 4, 00:09:54.910 "base_bdevs_list": [ 00:09:54.910 { 00:09:54.910 "name": "BaseBdev1", 00:09:54.910 "uuid": "19139010-71bf-4d9f-b0ec-e94bc9725b27", 00:09:54.910 "is_configured": true, 00:09:54.910 "data_offset": 0, 00:09:54.910 "data_size": 65536 00:09:54.910 }, 00:09:54.910 { 00:09:54.910 "name": "BaseBdev2", 00:09:54.910 "uuid": "c87ac0c4-0dee-4259-b228-e118690a5dcf", 00:09:54.910 
"is_configured": true, 00:09:54.910 "data_offset": 0, 00:09:54.910 "data_size": 65536 00:09:54.910 }, 00:09:54.910 { 00:09:54.910 "name": "BaseBdev3", 00:09:54.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.910 "is_configured": false, 00:09:54.910 "data_offset": 0, 00:09:54.910 "data_size": 0 00:09:54.910 }, 00:09:54.910 { 00:09:54.910 "name": "BaseBdev4", 00:09:54.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.910 "is_configured": false, 00:09:54.910 "data_offset": 0, 00:09:54.910 "data_size": 0 00:09:54.910 } 00:09:54.910 ] 00:09:54.910 }' 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.910 02:43:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 BaseBdev3 00:09:55.168 [2024-12-07 02:43:06.169750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 [ 00:09:55.168 { 00:09:55.168 "name": "BaseBdev3", 00:09:55.168 "aliases": [ 00:09:55.168 "981a7f26-2fa8-47b8-9d3f-9e7c263ec90f" 00:09:55.168 ], 00:09:55.168 "product_name": "Malloc disk", 00:09:55.168 "block_size": 512, 00:09:55.168 "num_blocks": 65536, 00:09:55.168 "uuid": "981a7f26-2fa8-47b8-9d3f-9e7c263ec90f", 00:09:55.168 "assigned_rate_limits": { 00:09:55.168 "rw_ios_per_sec": 0, 00:09:55.168 "rw_mbytes_per_sec": 0, 00:09:55.168 "r_mbytes_per_sec": 0, 00:09:55.168 "w_mbytes_per_sec": 0 00:09:55.168 }, 00:09:55.168 "claimed": true, 00:09:55.168 "claim_type": "exclusive_write", 00:09:55.168 "zoned": false, 00:09:55.168 "supported_io_types": { 00:09:55.168 "read": true, 00:09:55.168 "write": true, 00:09:55.168 "unmap": true, 00:09:55.168 "flush": true, 00:09:55.168 "reset": true, 00:09:55.168 "nvme_admin": false, 00:09:55.168 "nvme_io": false, 00:09:55.168 "nvme_io_md": false, 00:09:55.168 "write_zeroes": true, 00:09:55.168 "zcopy": true, 00:09:55.168 "get_zone_info": false, 00:09:55.168 "zone_management": false, 00:09:55.168 "zone_append": false, 00:09:55.168 "compare": false, 00:09:55.168 "compare_and_write": false, 
00:09:55.168 "abort": true, 00:09:55.168 "seek_hole": false, 00:09:55.168 "seek_data": false, 00:09:55.168 "copy": true, 00:09:55.168 "nvme_iov_md": false 00:09:55.168 }, 00:09:55.168 "memory_domains": [ 00:09:55.168 { 00:09:55.168 "dma_device_id": "system", 00:09:55.168 "dma_device_type": 1 00:09:55.168 }, 00:09:55.168 { 00:09:55.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.168 "dma_device_type": 2 00:09:55.168 } 00:09:55.168 ], 00:09:55.168 "driver_specific": {} 00:09:55.168 } 00:09:55.168 ] 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.168 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.427 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.427 "name": "Existed_Raid", 00:09:55.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.427 "strip_size_kb": 64, 00:09:55.427 "state": "configuring", 00:09:55.427 "raid_level": "raid0", 00:09:55.427 "superblock": false, 00:09:55.427 "num_base_bdevs": 4, 00:09:55.427 "num_base_bdevs_discovered": 3, 00:09:55.427 "num_base_bdevs_operational": 4, 00:09:55.427 "base_bdevs_list": [ 00:09:55.427 { 00:09:55.427 "name": "BaseBdev1", 00:09:55.427 "uuid": "19139010-71bf-4d9f-b0ec-e94bc9725b27", 00:09:55.427 "is_configured": true, 00:09:55.427 "data_offset": 0, 00:09:55.427 "data_size": 65536 00:09:55.427 }, 00:09:55.427 { 00:09:55.427 "name": "BaseBdev2", 00:09:55.427 "uuid": "c87ac0c4-0dee-4259-b228-e118690a5dcf", 00:09:55.427 "is_configured": true, 00:09:55.427 "data_offset": 0, 00:09:55.427 "data_size": 65536 00:09:55.427 }, 00:09:55.427 { 00:09:55.427 "name": "BaseBdev3", 00:09:55.427 "uuid": "981a7f26-2fa8-47b8-9d3f-9e7c263ec90f", 00:09:55.427 "is_configured": true, 00:09:55.428 "data_offset": 0, 00:09:55.428 "data_size": 65536 00:09:55.428 }, 00:09:55.428 { 00:09:55.428 "name": "BaseBdev4", 00:09:55.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.428 "is_configured": false, 
00:09:55.428 "data_offset": 0, 00:09:55.428 "data_size": 0 00:09:55.428 } 00:09:55.428 ] 00:09:55.428 }' 00:09:55.428 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.428 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.687 [2024-12-07 02:43:06.713587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.687 [2024-12-07 02:43:06.713647] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:09:55.687 [2024-12-07 02:43:06.713674] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:55.687 [2024-12-07 02:43:06.713989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:55.687 [2024-12-07 02:43:06.714171] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:09:55.687 [2024-12-07 02:43:06.714192] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:09:55.687 [2024-12-07 02:43:06.714424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.687 BaseBdev4 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.687 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.687 [ 00:09:55.687 { 00:09:55.687 "name": "BaseBdev4", 00:09:55.687 "aliases": [ 00:09:55.687 "398b9916-7ac2-4978-b38a-6daaa8f30d79" 00:09:55.687 ], 00:09:55.687 "product_name": "Malloc disk", 00:09:55.687 "block_size": 512, 00:09:55.687 "num_blocks": 65536, 00:09:55.687 "uuid": "398b9916-7ac2-4978-b38a-6daaa8f30d79", 00:09:55.687 "assigned_rate_limits": { 00:09:55.687 "rw_ios_per_sec": 0, 00:09:55.687 "rw_mbytes_per_sec": 0, 00:09:55.687 "r_mbytes_per_sec": 0, 00:09:55.687 "w_mbytes_per_sec": 0 00:09:55.687 }, 00:09:55.687 "claimed": true, 00:09:55.687 "claim_type": "exclusive_write", 00:09:55.687 "zoned": false, 00:09:55.687 "supported_io_types": { 00:09:55.687 "read": true, 00:09:55.687 "write": true, 00:09:55.687 "unmap": true, 00:09:55.687 "flush": true, 00:09:55.687 "reset": true, 00:09:55.687 
"nvme_admin": false, 00:09:55.687 "nvme_io": false, 00:09:55.687 "nvme_io_md": false, 00:09:55.687 "write_zeroes": true, 00:09:55.687 "zcopy": true, 00:09:55.687 "get_zone_info": false, 00:09:55.687 "zone_management": false, 00:09:55.687 "zone_append": false, 00:09:55.687 "compare": false, 00:09:55.687 "compare_and_write": false, 00:09:55.687 "abort": true, 00:09:55.687 "seek_hole": false, 00:09:55.687 "seek_data": false, 00:09:55.687 "copy": true, 00:09:55.687 "nvme_iov_md": false 00:09:55.687 }, 00:09:55.687 "memory_domains": [ 00:09:55.687 { 00:09:55.687 "dma_device_id": "system", 00:09:55.687 "dma_device_type": 1 00:09:55.687 }, 00:09:55.687 { 00:09:55.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.688 "dma_device_type": 2 00:09:55.688 } 00:09:55.688 ], 00:09:55.688 "driver_specific": {} 00:09:55.688 } 00:09:55.688 ] 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.688 02:43:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.688 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.947 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.947 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.947 "name": "Existed_Raid", 00:09:55.947 "uuid": "f70b8f7b-1d7c-45a2-8f7f-07909e62ad7c", 00:09:55.947 "strip_size_kb": 64, 00:09:55.947 "state": "online", 00:09:55.947 "raid_level": "raid0", 00:09:55.947 "superblock": false, 00:09:55.947 "num_base_bdevs": 4, 00:09:55.947 "num_base_bdevs_discovered": 4, 00:09:55.947 "num_base_bdevs_operational": 4, 00:09:55.947 "base_bdevs_list": [ 00:09:55.947 { 00:09:55.947 "name": "BaseBdev1", 00:09:55.947 "uuid": "19139010-71bf-4d9f-b0ec-e94bc9725b27", 00:09:55.947 "is_configured": true, 00:09:55.947 "data_offset": 0, 00:09:55.947 "data_size": 65536 00:09:55.947 }, 00:09:55.947 { 00:09:55.947 "name": "BaseBdev2", 00:09:55.947 "uuid": "c87ac0c4-0dee-4259-b228-e118690a5dcf", 00:09:55.947 "is_configured": true, 00:09:55.947 "data_offset": 0, 00:09:55.947 "data_size": 65536 00:09:55.947 }, 00:09:55.947 { 00:09:55.947 "name": "BaseBdev3", 00:09:55.947 "uuid": 
"981a7f26-2fa8-47b8-9d3f-9e7c263ec90f", 00:09:55.947 "is_configured": true, 00:09:55.947 "data_offset": 0, 00:09:55.947 "data_size": 65536 00:09:55.947 }, 00:09:55.947 { 00:09:55.947 "name": "BaseBdev4", 00:09:55.947 "uuid": "398b9916-7ac2-4978-b38a-6daaa8f30d79", 00:09:55.947 "is_configured": true, 00:09:55.947 "data_offset": 0, 00:09:55.947 "data_size": 65536 00:09:55.947 } 00:09:55.947 ] 00:09:55.947 }' 00:09:55.947 02:43:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.947 02:43:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.206 [2024-12-07 02:43:07.169199] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.206 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.206 02:43:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.206 "name": "Existed_Raid", 00:09:56.206 "aliases": [ 00:09:56.206 "f70b8f7b-1d7c-45a2-8f7f-07909e62ad7c" 00:09:56.206 ], 00:09:56.206 "product_name": "Raid Volume", 00:09:56.206 "block_size": 512, 00:09:56.206 "num_blocks": 262144, 00:09:56.206 "uuid": "f70b8f7b-1d7c-45a2-8f7f-07909e62ad7c", 00:09:56.206 "assigned_rate_limits": { 00:09:56.206 "rw_ios_per_sec": 0, 00:09:56.206 "rw_mbytes_per_sec": 0, 00:09:56.206 "r_mbytes_per_sec": 0, 00:09:56.206 "w_mbytes_per_sec": 0 00:09:56.206 }, 00:09:56.206 "claimed": false, 00:09:56.206 "zoned": false, 00:09:56.206 "supported_io_types": { 00:09:56.206 "read": true, 00:09:56.206 "write": true, 00:09:56.206 "unmap": true, 00:09:56.206 "flush": true, 00:09:56.206 "reset": true, 00:09:56.206 "nvme_admin": false, 00:09:56.206 "nvme_io": false, 00:09:56.206 "nvme_io_md": false, 00:09:56.206 "write_zeroes": true, 00:09:56.206 "zcopy": false, 00:09:56.206 "get_zone_info": false, 00:09:56.206 "zone_management": false, 00:09:56.206 "zone_append": false, 00:09:56.206 "compare": false, 00:09:56.206 "compare_and_write": false, 00:09:56.206 "abort": false, 00:09:56.206 "seek_hole": false, 00:09:56.206 "seek_data": false, 00:09:56.206 "copy": false, 00:09:56.206 "nvme_iov_md": false 00:09:56.206 }, 00:09:56.206 "memory_domains": [ 00:09:56.206 { 00:09:56.206 "dma_device_id": "system", 00:09:56.206 "dma_device_type": 1 00:09:56.206 }, 00:09:56.206 { 00:09:56.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.206 "dma_device_type": 2 00:09:56.206 }, 00:09:56.206 { 00:09:56.206 "dma_device_id": "system", 00:09:56.206 "dma_device_type": 1 00:09:56.206 }, 00:09:56.206 { 00:09:56.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.206 "dma_device_type": 2 00:09:56.207 }, 00:09:56.207 { 00:09:56.207 "dma_device_id": "system", 00:09:56.207 "dma_device_type": 1 00:09:56.207 }, 00:09:56.207 { 00:09:56.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:56.207 "dma_device_type": 2 00:09:56.207 }, 00:09:56.207 { 00:09:56.207 "dma_device_id": "system", 00:09:56.207 "dma_device_type": 1 00:09:56.207 }, 00:09:56.207 { 00:09:56.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.207 "dma_device_type": 2 00:09:56.207 } 00:09:56.207 ], 00:09:56.207 "driver_specific": { 00:09:56.207 "raid": { 00:09:56.207 "uuid": "f70b8f7b-1d7c-45a2-8f7f-07909e62ad7c", 00:09:56.207 "strip_size_kb": 64, 00:09:56.207 "state": "online", 00:09:56.207 "raid_level": "raid0", 00:09:56.207 "superblock": false, 00:09:56.207 "num_base_bdevs": 4, 00:09:56.207 "num_base_bdevs_discovered": 4, 00:09:56.207 "num_base_bdevs_operational": 4, 00:09:56.207 "base_bdevs_list": [ 00:09:56.207 { 00:09:56.207 "name": "BaseBdev1", 00:09:56.207 "uuid": "19139010-71bf-4d9f-b0ec-e94bc9725b27", 00:09:56.207 "is_configured": true, 00:09:56.207 "data_offset": 0, 00:09:56.207 "data_size": 65536 00:09:56.207 }, 00:09:56.207 { 00:09:56.207 "name": "BaseBdev2", 00:09:56.207 "uuid": "c87ac0c4-0dee-4259-b228-e118690a5dcf", 00:09:56.207 "is_configured": true, 00:09:56.207 "data_offset": 0, 00:09:56.207 "data_size": 65536 00:09:56.207 }, 00:09:56.207 { 00:09:56.207 "name": "BaseBdev3", 00:09:56.207 "uuid": "981a7f26-2fa8-47b8-9d3f-9e7c263ec90f", 00:09:56.207 "is_configured": true, 00:09:56.207 "data_offset": 0, 00:09:56.207 "data_size": 65536 00:09:56.207 }, 00:09:56.207 { 00:09:56.207 "name": "BaseBdev4", 00:09:56.207 "uuid": "398b9916-7ac2-4978-b38a-6daaa8f30d79", 00:09:56.207 "is_configured": true, 00:09:56.207 "data_offset": 0, 00:09:56.207 "data_size": 65536 00:09:56.207 } 00:09:56.207 ] 00:09:56.207 } 00:09:56.207 } 00:09:56.207 }' 00:09:56.207 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.207 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:56.207 BaseBdev2 00:09:56.207 BaseBdev3 
00:09:56.207 BaseBdev4' 00:09:56.207 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.466 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.467 02:43:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.467 02:43:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.467 [2024-12-07 02:43:07.472389] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.467 [2024-12-07 02:43:07.472425] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.467 [2024-12-07 02:43:07.472483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.467 "name": "Existed_Raid", 00:09:56.467 "uuid": "f70b8f7b-1d7c-45a2-8f7f-07909e62ad7c", 00:09:56.467 "strip_size_kb": 64, 00:09:56.467 "state": "offline", 00:09:56.467 "raid_level": "raid0", 00:09:56.467 "superblock": false, 00:09:56.467 "num_base_bdevs": 4, 00:09:56.467 "num_base_bdevs_discovered": 3, 00:09:56.467 "num_base_bdevs_operational": 3, 00:09:56.467 "base_bdevs_list": [ 00:09:56.467 { 00:09:56.467 "name": null, 00:09:56.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.467 "is_configured": false, 00:09:56.467 "data_offset": 0, 00:09:56.467 "data_size": 65536 00:09:56.467 }, 00:09:56.467 { 00:09:56.467 "name": "BaseBdev2", 00:09:56.467 "uuid": "c87ac0c4-0dee-4259-b228-e118690a5dcf", 00:09:56.467 "is_configured": 
true, 00:09:56.467 "data_offset": 0, 00:09:56.467 "data_size": 65536 00:09:56.467 }, 00:09:56.467 { 00:09:56.467 "name": "BaseBdev3", 00:09:56.467 "uuid": "981a7f26-2fa8-47b8-9d3f-9e7c263ec90f", 00:09:56.467 "is_configured": true, 00:09:56.467 "data_offset": 0, 00:09:56.467 "data_size": 65536 00:09:56.467 }, 00:09:56.467 { 00:09:56.467 "name": "BaseBdev4", 00:09:56.467 "uuid": "398b9916-7ac2-4978-b38a-6daaa8f30d79", 00:09:56.467 "is_configured": true, 00:09:56.467 "data_offset": 0, 00:09:56.467 "data_size": 65536 00:09:56.467 } 00:09:56.467 ] 00:09:56.467 }' 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.467 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 [2024-12-07 02:43:07.955897] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.037 02:43:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 [2024-12-07 02:43:08.028285] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.037 02:43:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.037 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.037 [2024-12-07 02:43:08.100523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:57.037 [2024-12-07 02:43:08.100574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.298 BaseBdev2 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.298 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.299 [ 00:09:57.299 { 00:09:57.299 "name": "BaseBdev2", 00:09:57.299 "aliases": [ 00:09:57.299 "05fe03c2-46bb-4152-8782-2176b0f85237" 00:09:57.299 ], 00:09:57.299 "product_name": "Malloc disk", 00:09:57.299 "block_size": 512, 00:09:57.299 "num_blocks": 65536, 00:09:57.299 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:09:57.299 "assigned_rate_limits": { 00:09:57.299 "rw_ios_per_sec": 0, 00:09:57.299 "rw_mbytes_per_sec": 0, 00:09:57.299 "r_mbytes_per_sec": 0, 00:09:57.299 "w_mbytes_per_sec": 0 00:09:57.299 }, 00:09:57.299 "claimed": false, 00:09:57.299 "zoned": false, 00:09:57.299 "supported_io_types": { 00:09:57.299 "read": true, 00:09:57.299 "write": true, 00:09:57.299 "unmap": true, 00:09:57.299 "flush": true, 00:09:57.299 "reset": true, 00:09:57.299 "nvme_admin": false, 00:09:57.299 "nvme_io": false, 00:09:57.299 "nvme_io_md": false, 00:09:57.299 "write_zeroes": true, 00:09:57.299 "zcopy": true, 00:09:57.299 "get_zone_info": false, 00:09:57.299 "zone_management": false, 00:09:57.299 "zone_append": false, 00:09:57.299 "compare": false, 00:09:57.299 "compare_and_write": false, 00:09:57.299 "abort": true, 00:09:57.299 "seek_hole": false, 00:09:57.299 
"seek_data": false, 00:09:57.299 "copy": true, 00:09:57.299 "nvme_iov_md": false 00:09:57.299 }, 00:09:57.299 "memory_domains": [ 00:09:57.299 { 00:09:57.299 "dma_device_id": "system", 00:09:57.299 "dma_device_type": 1 00:09:57.299 }, 00:09:57.299 { 00:09:57.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.299 "dma_device_type": 2 00:09:57.299 } 00:09:57.299 ], 00:09:57.299 "driver_specific": {} 00:09:57.299 } 00:09:57.299 ] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.299 BaseBdev3 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.299 [ 00:09:57.299 { 00:09:57.299 "name": "BaseBdev3", 00:09:57.299 "aliases": [ 00:09:57.299 "0ec6e658-005d-406e-99d0-5e3b05043880" 00:09:57.299 ], 00:09:57.299 "product_name": "Malloc disk", 00:09:57.299 "block_size": 512, 00:09:57.299 "num_blocks": 65536, 00:09:57.299 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:09:57.299 "assigned_rate_limits": { 00:09:57.299 "rw_ios_per_sec": 0, 00:09:57.299 "rw_mbytes_per_sec": 0, 00:09:57.299 "r_mbytes_per_sec": 0, 00:09:57.299 "w_mbytes_per_sec": 0 00:09:57.299 }, 00:09:57.299 "claimed": false, 00:09:57.299 "zoned": false, 00:09:57.299 "supported_io_types": { 00:09:57.299 "read": true, 00:09:57.299 "write": true, 00:09:57.299 "unmap": true, 00:09:57.299 "flush": true, 00:09:57.299 "reset": true, 00:09:57.299 "nvme_admin": false, 00:09:57.299 "nvme_io": false, 00:09:57.299 "nvme_io_md": false, 00:09:57.299 "write_zeroes": true, 00:09:57.299 "zcopy": true, 00:09:57.299 "get_zone_info": false, 00:09:57.299 "zone_management": false, 00:09:57.299 "zone_append": false, 00:09:57.299 "compare": false, 00:09:57.299 "compare_and_write": false, 00:09:57.299 "abort": true, 00:09:57.299 "seek_hole": false, 00:09:57.299 "seek_data": false, 
00:09:57.299 "copy": true, 00:09:57.299 "nvme_iov_md": false 00:09:57.299 }, 00:09:57.299 "memory_domains": [ 00:09:57.299 { 00:09:57.299 "dma_device_id": "system", 00:09:57.299 "dma_device_type": 1 00:09:57.299 }, 00:09:57.299 { 00:09:57.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.299 "dma_device_type": 2 00:09:57.299 } 00:09:57.299 ], 00:09:57.299 "driver_specific": {} 00:09:57.299 } 00:09:57.299 ] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.299 BaseBdev4 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:57.299 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:57.300 
02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.300 [ 00:09:57.300 { 00:09:57.300 "name": "BaseBdev4", 00:09:57.300 "aliases": [ 00:09:57.300 "5438c728-7e03-42c2-a8f4-4872531e723f" 00:09:57.300 ], 00:09:57.300 "product_name": "Malloc disk", 00:09:57.300 "block_size": 512, 00:09:57.300 "num_blocks": 65536, 00:09:57.300 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:09:57.300 "assigned_rate_limits": { 00:09:57.300 "rw_ios_per_sec": 0, 00:09:57.300 "rw_mbytes_per_sec": 0, 00:09:57.300 "r_mbytes_per_sec": 0, 00:09:57.300 "w_mbytes_per_sec": 0 00:09:57.300 }, 00:09:57.300 "claimed": false, 00:09:57.300 "zoned": false, 00:09:57.300 "supported_io_types": { 00:09:57.300 "read": true, 00:09:57.300 "write": true, 00:09:57.300 "unmap": true, 00:09:57.300 "flush": true, 00:09:57.300 "reset": true, 00:09:57.300 "nvme_admin": false, 00:09:57.300 "nvme_io": false, 00:09:57.300 "nvme_io_md": false, 00:09:57.300 "write_zeroes": true, 00:09:57.300 "zcopy": true, 00:09:57.300 "get_zone_info": false, 00:09:57.300 "zone_management": false, 00:09:57.300 "zone_append": false, 00:09:57.300 "compare": false, 00:09:57.300 "compare_and_write": false, 00:09:57.300 "abort": true, 00:09:57.300 "seek_hole": false, 00:09:57.300 "seek_data": false, 00:09:57.300 
"copy": true, 00:09:57.300 "nvme_iov_md": false 00:09:57.300 }, 00:09:57.300 "memory_domains": [ 00:09:57.300 { 00:09:57.300 "dma_device_id": "system", 00:09:57.300 "dma_device_type": 1 00:09:57.300 }, 00:09:57.300 { 00:09:57.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.300 "dma_device_type": 2 00:09:57.300 } 00:09:57.300 ], 00:09:57.300 "driver_specific": {} 00:09:57.300 } 00:09:57.300 ] 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.300 [2024-12-07 02:43:08.357141] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.300 [2024-12-07 02:43:08.357186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.300 [2024-12-07 02:43:08.357210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.300 [2024-12-07 02:43:08.359312] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.300 [2024-12-07 02:43:08.359365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.300 02:43:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.300 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.560 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.560 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.560 "name": "Existed_Raid", 00:09:57.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.560 "strip_size_kb": 64, 00:09:57.560 "state": "configuring", 00:09:57.560 
"raid_level": "raid0", 00:09:57.560 "superblock": false, 00:09:57.560 "num_base_bdevs": 4, 00:09:57.560 "num_base_bdevs_discovered": 3, 00:09:57.560 "num_base_bdevs_operational": 4, 00:09:57.560 "base_bdevs_list": [ 00:09:57.560 { 00:09:57.560 "name": "BaseBdev1", 00:09:57.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.560 "is_configured": false, 00:09:57.560 "data_offset": 0, 00:09:57.560 "data_size": 0 00:09:57.560 }, 00:09:57.560 { 00:09:57.560 "name": "BaseBdev2", 00:09:57.560 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:09:57.560 "is_configured": true, 00:09:57.560 "data_offset": 0, 00:09:57.560 "data_size": 65536 00:09:57.560 }, 00:09:57.560 { 00:09:57.560 "name": "BaseBdev3", 00:09:57.560 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:09:57.560 "is_configured": true, 00:09:57.560 "data_offset": 0, 00:09:57.560 "data_size": 65536 00:09:57.560 }, 00:09:57.560 { 00:09:57.560 "name": "BaseBdev4", 00:09:57.560 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:09:57.560 "is_configured": true, 00:09:57.560 "data_offset": 0, 00:09:57.560 "data_size": 65536 00:09:57.560 } 00:09:57.560 ] 00:09:57.560 }' 00:09:57.560 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.560 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.820 [2024-12-07 02:43:08.816374] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.820 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.820 "name": "Existed_Raid", 00:09:57.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.820 "strip_size_kb": 64, 00:09:57.820 "state": "configuring", 00:09:57.820 "raid_level": "raid0", 00:09:57.820 "superblock": false, 00:09:57.820 
"num_base_bdevs": 4, 00:09:57.820 "num_base_bdevs_discovered": 2, 00:09:57.820 "num_base_bdevs_operational": 4, 00:09:57.820 "base_bdevs_list": [ 00:09:57.820 { 00:09:57.820 "name": "BaseBdev1", 00:09:57.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.820 "is_configured": false, 00:09:57.820 "data_offset": 0, 00:09:57.820 "data_size": 0 00:09:57.820 }, 00:09:57.820 { 00:09:57.820 "name": null, 00:09:57.820 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:09:57.820 "is_configured": false, 00:09:57.820 "data_offset": 0, 00:09:57.820 "data_size": 65536 00:09:57.820 }, 00:09:57.820 { 00:09:57.820 "name": "BaseBdev3", 00:09:57.821 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:09:57.821 "is_configured": true, 00:09:57.821 "data_offset": 0, 00:09:57.821 "data_size": 65536 00:09:57.821 }, 00:09:57.821 { 00:09:57.821 "name": "BaseBdev4", 00:09:57.821 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:09:57.821 "is_configured": true, 00:09:57.821 "data_offset": 0, 00:09:57.821 "data_size": 65536 00:09:57.821 } 00:09:57.821 ] 00:09:57.821 }' 00:09:57.821 02:43:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.821 02:43:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:58.391 02:43:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.391 [2024-12-07 02:43:09.284417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.391 BaseBdev1 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.391 [ 00:09:58.391 { 00:09:58.391 "name": "BaseBdev1", 00:09:58.391 "aliases": [ 00:09:58.391 "adff3157-9d72-4906-b496-79b36826f241" 00:09:58.391 ], 00:09:58.391 "product_name": "Malloc disk", 00:09:58.391 "block_size": 512, 00:09:58.391 "num_blocks": 65536, 00:09:58.391 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:09:58.391 "assigned_rate_limits": { 00:09:58.391 "rw_ios_per_sec": 0, 00:09:58.391 "rw_mbytes_per_sec": 0, 00:09:58.391 "r_mbytes_per_sec": 0, 00:09:58.391 "w_mbytes_per_sec": 0 00:09:58.391 }, 00:09:58.391 "claimed": true, 00:09:58.391 "claim_type": "exclusive_write", 00:09:58.391 "zoned": false, 00:09:58.391 "supported_io_types": { 00:09:58.391 "read": true, 00:09:58.391 "write": true, 00:09:58.391 "unmap": true, 00:09:58.391 "flush": true, 00:09:58.391 "reset": true, 00:09:58.391 "nvme_admin": false, 00:09:58.391 "nvme_io": false, 00:09:58.391 "nvme_io_md": false, 00:09:58.391 "write_zeroes": true, 00:09:58.391 "zcopy": true, 00:09:58.391 "get_zone_info": false, 00:09:58.391 "zone_management": false, 00:09:58.391 "zone_append": false, 00:09:58.391 "compare": false, 00:09:58.391 "compare_and_write": false, 00:09:58.391 "abort": true, 00:09:58.391 "seek_hole": false, 00:09:58.391 "seek_data": false, 00:09:58.391 "copy": true, 00:09:58.391 "nvme_iov_md": false 00:09:58.391 }, 00:09:58.391 "memory_domains": [ 00:09:58.391 { 00:09:58.391 "dma_device_id": "system", 00:09:58.391 "dma_device_type": 1 00:09:58.391 }, 00:09:58.391 { 00:09:58.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.391 "dma_device_type": 2 00:09:58.391 } 00:09:58.391 ], 00:09:58.391 "driver_specific": {} 00:09:58.391 } 00:09:58.391 ] 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.391 "name": "Existed_Raid", 00:09:58.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.391 "strip_size_kb": 64, 00:09:58.391 "state": "configuring", 00:09:58.391 "raid_level": "raid0", 00:09:58.391 "superblock": false, 
00:09:58.391 "num_base_bdevs": 4, 00:09:58.391 "num_base_bdevs_discovered": 3, 00:09:58.391 "num_base_bdevs_operational": 4, 00:09:58.391 "base_bdevs_list": [ 00:09:58.391 { 00:09:58.391 "name": "BaseBdev1", 00:09:58.391 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:09:58.391 "is_configured": true, 00:09:58.391 "data_offset": 0, 00:09:58.391 "data_size": 65536 00:09:58.391 }, 00:09:58.391 { 00:09:58.391 "name": null, 00:09:58.391 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:09:58.391 "is_configured": false, 00:09:58.391 "data_offset": 0, 00:09:58.391 "data_size": 65536 00:09:58.391 }, 00:09:58.391 { 00:09:58.391 "name": "BaseBdev3", 00:09:58.391 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:09:58.391 "is_configured": true, 00:09:58.391 "data_offset": 0, 00:09:58.391 "data_size": 65536 00:09:58.391 }, 00:09:58.391 { 00:09:58.391 "name": "BaseBdev4", 00:09:58.391 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:09:58.391 "is_configured": true, 00:09:58.391 "data_offset": 0, 00:09:58.391 "data_size": 65536 00:09:58.391 } 00:09:58.391 ] 00:09:58.391 }' 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.391 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:58.962 02:43:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.962 [2024-12-07 02:43:09.791631] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.962 02:43:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.962 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.962 "name": "Existed_Raid", 00:09:58.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.962 "strip_size_kb": 64, 00:09:58.962 "state": "configuring", 00:09:58.962 "raid_level": "raid0", 00:09:58.962 "superblock": false, 00:09:58.962 "num_base_bdevs": 4, 00:09:58.962 "num_base_bdevs_discovered": 2, 00:09:58.962 "num_base_bdevs_operational": 4, 00:09:58.962 "base_bdevs_list": [ 00:09:58.962 { 00:09:58.962 "name": "BaseBdev1", 00:09:58.962 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:09:58.962 "is_configured": true, 00:09:58.962 "data_offset": 0, 00:09:58.962 "data_size": 65536 00:09:58.962 }, 00:09:58.962 { 00:09:58.962 "name": null, 00:09:58.962 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:09:58.962 "is_configured": false, 00:09:58.962 "data_offset": 0, 00:09:58.962 "data_size": 65536 00:09:58.962 }, 00:09:58.962 { 00:09:58.962 "name": null, 00:09:58.962 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:09:58.962 "is_configured": false, 00:09:58.962 "data_offset": 0, 00:09:58.962 "data_size": 65536 00:09:58.962 }, 00:09:58.962 { 00:09:58.962 "name": "BaseBdev4", 00:09:58.962 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:09:58.963 "is_configured": true, 00:09:58.963 "data_offset": 0, 00:09:58.963 "data_size": 65536 00:09:58.963 } 00:09:58.963 ] 00:09:58.963 }' 00:09:58.963 02:43:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.963 02:43:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.223 [2024-12-07 02:43:10.290859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.223 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.483 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.483 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.483 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.483 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.483 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.483 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.483 "name": "Existed_Raid", 00:09:59.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.483 "strip_size_kb": 64, 00:09:59.483 "state": "configuring", 00:09:59.483 "raid_level": "raid0", 00:09:59.483 "superblock": false, 00:09:59.483 "num_base_bdevs": 4, 00:09:59.483 "num_base_bdevs_discovered": 3, 00:09:59.483 "num_base_bdevs_operational": 4, 00:09:59.484 "base_bdevs_list": [ 00:09:59.484 { 00:09:59.484 "name": "BaseBdev1", 00:09:59.484 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:09:59.484 "is_configured": true, 00:09:59.484 "data_offset": 0, 00:09:59.484 "data_size": 65536 00:09:59.484 }, 00:09:59.484 { 00:09:59.484 "name": null, 00:09:59.484 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:09:59.484 "is_configured": false, 00:09:59.484 "data_offset": 0, 00:09:59.484 "data_size": 65536 00:09:59.484 }, 00:09:59.484 { 00:09:59.484 "name": "BaseBdev3", 00:09:59.484 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 
00:09:59.484 "is_configured": true, 00:09:59.484 "data_offset": 0, 00:09:59.484 "data_size": 65536 00:09:59.484 }, 00:09:59.484 { 00:09:59.484 "name": "BaseBdev4", 00:09:59.484 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:09:59.484 "is_configured": true, 00:09:59.484 "data_offset": 0, 00:09:59.484 "data_size": 65536 00:09:59.484 } 00:09:59.484 ] 00:09:59.484 }' 00:09:59.484 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.484 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.744 [2024-12-07 02:43:10.766012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.744 02:43:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.744 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.004 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.004 "name": "Existed_Raid", 00:10:00.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.004 "strip_size_kb": 64, 00:10:00.004 "state": "configuring", 00:10:00.004 "raid_level": "raid0", 00:10:00.004 "superblock": false, 00:10:00.004 "num_base_bdevs": 4, 00:10:00.004 "num_base_bdevs_discovered": 2, 00:10:00.004 
"num_base_bdevs_operational": 4, 00:10:00.004 "base_bdevs_list": [ 00:10:00.004 { 00:10:00.004 "name": null, 00:10:00.004 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:10:00.004 "is_configured": false, 00:10:00.004 "data_offset": 0, 00:10:00.004 "data_size": 65536 00:10:00.004 }, 00:10:00.004 { 00:10:00.004 "name": null, 00:10:00.004 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:10:00.004 "is_configured": false, 00:10:00.004 "data_offset": 0, 00:10:00.004 "data_size": 65536 00:10:00.004 }, 00:10:00.004 { 00:10:00.004 "name": "BaseBdev3", 00:10:00.004 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:10:00.004 "is_configured": true, 00:10:00.004 "data_offset": 0, 00:10:00.004 "data_size": 65536 00:10:00.004 }, 00:10:00.004 { 00:10:00.004 "name": "BaseBdev4", 00:10:00.004 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:10:00.004 "is_configured": true, 00:10:00.004 "data_offset": 0, 00:10:00.004 "data_size": 65536 00:10:00.004 } 00:10:00.004 ] 00:10:00.004 }' 00:10:00.004 02:43:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.004 02:43:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.263 [2024-12-07 02:43:11.240857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.263 
02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.263 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.263 "name": "Existed_Raid", 00:10:00.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.263 "strip_size_kb": 64, 00:10:00.263 "state": "configuring", 00:10:00.263 "raid_level": "raid0", 00:10:00.263 "superblock": false, 00:10:00.263 "num_base_bdevs": 4, 00:10:00.263 "num_base_bdevs_discovered": 3, 00:10:00.263 "num_base_bdevs_operational": 4, 00:10:00.264 "base_bdevs_list": [ 00:10:00.264 { 00:10:00.264 "name": null, 00:10:00.264 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:10:00.264 "is_configured": false, 00:10:00.264 "data_offset": 0, 00:10:00.264 "data_size": 65536 00:10:00.264 }, 00:10:00.264 { 00:10:00.264 "name": "BaseBdev2", 00:10:00.264 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:10:00.264 "is_configured": true, 00:10:00.264 "data_offset": 0, 00:10:00.264 "data_size": 65536 00:10:00.264 }, 00:10:00.264 { 00:10:00.264 "name": "BaseBdev3", 00:10:00.264 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:10:00.264 "is_configured": true, 00:10:00.264 "data_offset": 0, 00:10:00.264 "data_size": 65536 00:10:00.264 }, 00:10:00.264 { 00:10:00.264 "name": "BaseBdev4", 00:10:00.264 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:10:00.264 "is_configured": true, 00:10:00.264 "data_offset": 0, 00:10:00.264 "data_size": 65536 00:10:00.264 } 00:10:00.264 ] 00:10:00.264 }' 00:10:00.264 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.264 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.833 02:43:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u adff3157-9d72-4906-b496-79b36826f241 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 [2024-12-07 02:43:11.824672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:00.833 [2024-12-07 02:43:11.824768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:00.833 [2024-12-07 02:43:11.824776] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:00.833 [2024-12-07 02:43:11.825079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:00.833 
[2024-12-07 02:43:11.825224] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:00.833 [2024-12-07 02:43:11.825244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:00.833 [2024-12-07 02:43:11.825438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.833 NewBaseBdev 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.833 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:00.833 [ 00:10:00.833 { 00:10:00.833 "name": "NewBaseBdev", 00:10:00.834 "aliases": [ 00:10:00.834 "adff3157-9d72-4906-b496-79b36826f241" 00:10:00.834 ], 00:10:00.834 "product_name": "Malloc disk", 00:10:00.834 "block_size": 512, 00:10:00.834 "num_blocks": 65536, 00:10:00.834 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:10:00.834 "assigned_rate_limits": { 00:10:00.834 "rw_ios_per_sec": 0, 00:10:00.834 "rw_mbytes_per_sec": 0, 00:10:00.834 "r_mbytes_per_sec": 0, 00:10:00.834 "w_mbytes_per_sec": 0 00:10:00.834 }, 00:10:00.834 "claimed": true, 00:10:00.834 "claim_type": "exclusive_write", 00:10:00.834 "zoned": false, 00:10:00.834 "supported_io_types": { 00:10:00.834 "read": true, 00:10:00.834 "write": true, 00:10:00.834 "unmap": true, 00:10:00.834 "flush": true, 00:10:00.834 "reset": true, 00:10:00.834 "nvme_admin": false, 00:10:00.834 "nvme_io": false, 00:10:00.834 "nvme_io_md": false, 00:10:00.834 "write_zeroes": true, 00:10:00.834 "zcopy": true, 00:10:00.834 "get_zone_info": false, 00:10:00.834 "zone_management": false, 00:10:00.834 "zone_append": false, 00:10:00.834 "compare": false, 00:10:00.834 "compare_and_write": false, 00:10:00.834 "abort": true, 00:10:00.834 "seek_hole": false, 00:10:00.834 "seek_data": false, 00:10:00.834 "copy": true, 00:10:00.834 "nvme_iov_md": false 00:10:00.834 }, 00:10:00.834 "memory_domains": [ 00:10:00.834 { 00:10:00.834 "dma_device_id": "system", 00:10:00.834 "dma_device_type": 1 00:10:00.834 }, 00:10:00.834 { 00:10:00.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.834 "dma_device_type": 2 00:10:00.834 } 00:10:00.834 ], 00:10:00.834 "driver_specific": {} 00:10:00.834 } 00:10:00.834 ] 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.834 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.095 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.095 "name": "Existed_Raid", 00:10:01.095 "uuid": "20d35367-44b8-45db-95ca-0977e5338524", 00:10:01.095 "strip_size_kb": 64, 00:10:01.095 "state": "online", 00:10:01.095 "raid_level": "raid0", 00:10:01.095 "superblock": false, 00:10:01.095 "num_base_bdevs": 4, 00:10:01.095 
"num_base_bdevs_discovered": 4, 00:10:01.095 "num_base_bdevs_operational": 4, 00:10:01.095 "base_bdevs_list": [ 00:10:01.095 { 00:10:01.095 "name": "NewBaseBdev", 00:10:01.095 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:10:01.095 "is_configured": true, 00:10:01.095 "data_offset": 0, 00:10:01.095 "data_size": 65536 00:10:01.095 }, 00:10:01.095 { 00:10:01.095 "name": "BaseBdev2", 00:10:01.095 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:10:01.095 "is_configured": true, 00:10:01.095 "data_offset": 0, 00:10:01.095 "data_size": 65536 00:10:01.095 }, 00:10:01.095 { 00:10:01.095 "name": "BaseBdev3", 00:10:01.095 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:10:01.095 "is_configured": true, 00:10:01.095 "data_offset": 0, 00:10:01.095 "data_size": 65536 00:10:01.095 }, 00:10:01.095 { 00:10:01.095 "name": "BaseBdev4", 00:10:01.095 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:10:01.095 "is_configured": true, 00:10:01.095 "data_offset": 0, 00:10:01.095 "data_size": 65536 00:10:01.095 } 00:10:01.095 ] 00:10:01.095 }' 00:10:01.095 02:43:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.095 02:43:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.356 [2024-12-07 02:43:12.264298] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.356 "name": "Existed_Raid", 00:10:01.356 "aliases": [ 00:10:01.356 "20d35367-44b8-45db-95ca-0977e5338524" 00:10:01.356 ], 00:10:01.356 "product_name": "Raid Volume", 00:10:01.356 "block_size": 512, 00:10:01.356 "num_blocks": 262144, 00:10:01.356 "uuid": "20d35367-44b8-45db-95ca-0977e5338524", 00:10:01.356 "assigned_rate_limits": { 00:10:01.356 "rw_ios_per_sec": 0, 00:10:01.356 "rw_mbytes_per_sec": 0, 00:10:01.356 "r_mbytes_per_sec": 0, 00:10:01.356 "w_mbytes_per_sec": 0 00:10:01.356 }, 00:10:01.356 "claimed": false, 00:10:01.356 "zoned": false, 00:10:01.356 "supported_io_types": { 00:10:01.356 "read": true, 00:10:01.356 "write": true, 00:10:01.356 "unmap": true, 00:10:01.356 "flush": true, 00:10:01.356 "reset": true, 00:10:01.356 "nvme_admin": false, 00:10:01.356 "nvme_io": false, 00:10:01.356 "nvme_io_md": false, 00:10:01.356 "write_zeroes": true, 00:10:01.356 "zcopy": false, 00:10:01.356 "get_zone_info": false, 00:10:01.356 "zone_management": false, 00:10:01.356 "zone_append": false, 00:10:01.356 "compare": false, 00:10:01.356 "compare_and_write": false, 00:10:01.356 "abort": false, 00:10:01.356 "seek_hole": false, 00:10:01.356 "seek_data": false, 00:10:01.356 "copy": false, 00:10:01.356 "nvme_iov_md": false 00:10:01.356 }, 00:10:01.356 "memory_domains": [ 
00:10:01.356 { 00:10:01.356 "dma_device_id": "system", 00:10:01.356 "dma_device_type": 1 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.356 "dma_device_type": 2 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "dma_device_id": "system", 00:10:01.356 "dma_device_type": 1 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.356 "dma_device_type": 2 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "dma_device_id": "system", 00:10:01.356 "dma_device_type": 1 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.356 "dma_device_type": 2 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "dma_device_id": "system", 00:10:01.356 "dma_device_type": 1 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.356 "dma_device_type": 2 00:10:01.356 } 00:10:01.356 ], 00:10:01.356 "driver_specific": { 00:10:01.356 "raid": { 00:10:01.356 "uuid": "20d35367-44b8-45db-95ca-0977e5338524", 00:10:01.356 "strip_size_kb": 64, 00:10:01.356 "state": "online", 00:10:01.356 "raid_level": "raid0", 00:10:01.356 "superblock": false, 00:10:01.356 "num_base_bdevs": 4, 00:10:01.356 "num_base_bdevs_discovered": 4, 00:10:01.356 "num_base_bdevs_operational": 4, 00:10:01.356 "base_bdevs_list": [ 00:10:01.356 { 00:10:01.356 "name": "NewBaseBdev", 00:10:01.356 "uuid": "adff3157-9d72-4906-b496-79b36826f241", 00:10:01.356 "is_configured": true, 00:10:01.356 "data_offset": 0, 00:10:01.356 "data_size": 65536 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "name": "BaseBdev2", 00:10:01.356 "uuid": "05fe03c2-46bb-4152-8782-2176b0f85237", 00:10:01.356 "is_configured": true, 00:10:01.356 "data_offset": 0, 00:10:01.356 "data_size": 65536 00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "name": "BaseBdev3", 00:10:01.356 "uuid": "0ec6e658-005d-406e-99d0-5e3b05043880", 00:10:01.356 "is_configured": true, 00:10:01.356 "data_offset": 0, 00:10:01.356 "data_size": 65536 
00:10:01.356 }, 00:10:01.356 { 00:10:01.356 "name": "BaseBdev4", 00:10:01.356 "uuid": "5438c728-7e03-42c2-a8f4-4872531e723f", 00:10:01.356 "is_configured": true, 00:10:01.356 "data_offset": 0, 00:10:01.356 "data_size": 65536 00:10:01.356 } 00:10:01.356 ] 00:10:01.356 } 00:10:01.356 } 00:10:01.356 }' 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:01.356 BaseBdev2 00:10:01.356 BaseBdev3 00:10:01.356 BaseBdev4' 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.356 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.357 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.357 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.617 
02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.617 [2024-12-07 02:43:12.567471] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.617 [2024-12-07 02:43:12.567512] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.617 [2024-12-07 02:43:12.567603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.617 [2024-12-07 02:43:12.567684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.617 [2024-12-07 02:43:12.567703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80583 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80583 ']' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80583 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80583 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.617 killing process with pid 80583 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80583' 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80583 00:10:01.617 [2024-12-07 02:43:12.614132] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.617 02:43:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80583 00:10:01.617 [2024-12-07 02:43:12.691807] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:02.188 00:10:02.188 real 0m9.732s 00:10:02.188 user 0m16.371s 00:10:02.188 sys 0m2.075s 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.188 ************************************ 00:10:02.188 END TEST raid_state_function_test 00:10:02.188 ************************************ 00:10:02.188 02:43:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:02.188 02:43:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:02.188 02:43:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.188 02:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.188 ************************************ 00:10:02.188 START TEST raid_state_function_test_sb 00:10:02.188 ************************************ 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:02.188 
02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:02.188 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=81232 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81232' 00:10:02.189 Process raid pid: 81232 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 81232 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 81232 ']' 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.189 02:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.189 [2024-12-07 02:43:13.231505] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:02.189 [2024-12-07 02:43:13.231655] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.449 [2024-12-07 02:43:13.391355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.449 [2024-12-07 02:43:13.461955] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.710 [2024-12-07 02:43:13.538033] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.710 [2024-12-07 02:43:13.538074] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.281 [2024-12-07 02:43:14.053272] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.281 [2024-12-07 02:43:14.053323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.281 [2024-12-07 02:43:14.053344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.281 [2024-12-07 02:43:14.053355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.281 [2024-12-07 02:43:14.053361] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:03.281 [2024-12-07 02:43:14.053373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.281 [2024-12-07 02:43:14.053379] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.281 [2024-12-07 02:43:14.053390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.281 02:43:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.281 "name": "Existed_Raid", 00:10:03.281 "uuid": "8c861bba-1afd-43d5-9415-dc5001e9eba0", 00:10:03.281 "strip_size_kb": 64, 00:10:03.281 "state": "configuring", 00:10:03.281 "raid_level": "raid0", 00:10:03.281 "superblock": true, 00:10:03.281 "num_base_bdevs": 4, 00:10:03.281 "num_base_bdevs_discovered": 0, 00:10:03.281 "num_base_bdevs_operational": 4, 00:10:03.281 "base_bdevs_list": [ 00:10:03.281 { 00:10:03.281 "name": "BaseBdev1", 00:10:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.281 "is_configured": false, 00:10:03.281 "data_offset": 0, 00:10:03.281 "data_size": 0 00:10:03.281 }, 00:10:03.281 { 00:10:03.281 "name": "BaseBdev2", 00:10:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.281 "is_configured": false, 00:10:03.281 "data_offset": 0, 00:10:03.281 "data_size": 0 00:10:03.281 }, 00:10:03.281 { 00:10:03.281 "name": "BaseBdev3", 00:10:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.281 "is_configured": false, 00:10:03.281 "data_offset": 0, 00:10:03.281 "data_size": 0 00:10:03.281 }, 00:10:03.281 { 00:10:03.281 "name": "BaseBdev4", 00:10:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.281 "is_configured": false, 00:10:03.281 "data_offset": 0, 00:10:03.281 "data_size": 0 00:10:03.281 } 00:10:03.281 ] 00:10:03.281 }' 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.281 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.545 [2024-12-07 02:43:14.488426] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.545 [2024-12-07 02:43:14.488477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.545 [2024-12-07 02:43:14.500454] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.545 [2024-12-07 02:43:14.500496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.545 [2024-12-07 02:43:14.500505] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.545 [2024-12-07 02:43:14.500515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.545 [2024-12-07 02:43:14.500521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.545 [2024-12-07 02:43:14.500531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.545 [2024-12-07 02:43:14.500536] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:03.545 [2024-12-07 02:43:14.500546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.545 [2024-12-07 02:43:14.527658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.545 BaseBdev1 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.545 [ 00:10:03.545 { 00:10:03.545 "name": "BaseBdev1", 00:10:03.545 "aliases": [ 00:10:03.545 "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262" 00:10:03.545 ], 00:10:03.545 "product_name": "Malloc disk", 00:10:03.545 "block_size": 512, 00:10:03.545 "num_blocks": 65536, 00:10:03.545 "uuid": "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262", 00:10:03.545 "assigned_rate_limits": { 00:10:03.545 "rw_ios_per_sec": 0, 00:10:03.545 "rw_mbytes_per_sec": 0, 00:10:03.545 "r_mbytes_per_sec": 0, 00:10:03.545 "w_mbytes_per_sec": 0 00:10:03.545 }, 00:10:03.545 "claimed": true, 00:10:03.545 "claim_type": "exclusive_write", 00:10:03.545 "zoned": false, 00:10:03.545 "supported_io_types": { 00:10:03.545 "read": true, 00:10:03.545 "write": true, 00:10:03.545 "unmap": true, 00:10:03.545 "flush": true, 00:10:03.545 "reset": true, 00:10:03.545 "nvme_admin": false, 00:10:03.545 "nvme_io": false, 00:10:03.545 "nvme_io_md": false, 00:10:03.545 "write_zeroes": true, 00:10:03.545 "zcopy": true, 00:10:03.545 "get_zone_info": false, 00:10:03.545 "zone_management": false, 00:10:03.545 "zone_append": false, 00:10:03.545 "compare": false, 00:10:03.545 "compare_and_write": false, 00:10:03.545 "abort": true, 00:10:03.545 "seek_hole": false, 00:10:03.545 "seek_data": false, 00:10:03.545 "copy": true, 00:10:03.545 "nvme_iov_md": false 00:10:03.545 }, 00:10:03.545 "memory_domains": [ 00:10:03.545 { 00:10:03.545 "dma_device_id": "system", 00:10:03.545 "dma_device_type": 1 00:10:03.545 }, 00:10:03.545 { 00:10:03.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.545 "dma_device_type": 2 00:10:03.545 } 00:10:03.545 ], 00:10:03.545 "driver_specific": {} 
00:10:03.545 } 00:10:03.545 ] 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:03.545 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.546 "name": "Existed_Raid", 00:10:03.546 "uuid": "9af1512a-b90c-47cd-90a3-ae6848760719", 00:10:03.546 "strip_size_kb": 64, 00:10:03.546 "state": "configuring", 00:10:03.546 "raid_level": "raid0", 00:10:03.546 "superblock": true, 00:10:03.546 "num_base_bdevs": 4, 00:10:03.546 "num_base_bdevs_discovered": 1, 00:10:03.546 "num_base_bdevs_operational": 4, 00:10:03.546 "base_bdevs_list": [ 00:10:03.546 { 00:10:03.546 "name": "BaseBdev1", 00:10:03.546 "uuid": "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262", 00:10:03.546 "is_configured": true, 00:10:03.546 "data_offset": 2048, 00:10:03.546 "data_size": 63488 00:10:03.546 }, 00:10:03.546 { 00:10:03.546 "name": "BaseBdev2", 00:10:03.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.546 "is_configured": false, 00:10:03.546 "data_offset": 0, 00:10:03.546 "data_size": 0 00:10:03.546 }, 00:10:03.546 { 00:10:03.546 "name": "BaseBdev3", 00:10:03.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.546 "is_configured": false, 00:10:03.546 "data_offset": 0, 00:10:03.546 "data_size": 0 00:10:03.546 }, 00:10:03.546 { 00:10:03.546 "name": "BaseBdev4", 00:10:03.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.546 "is_configured": false, 00:10:03.546 "data_offset": 0, 00:10:03.546 "data_size": 0 00:10:03.546 } 00:10:03.546 ] 00:10:03.546 }' 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.546 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.135 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:04.136 [2024-12-07 02:43:14.970903] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.136 [2024-12-07 02:43:14.970957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.136 [2024-12-07 02:43:14.982945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.136 [2024-12-07 02:43:14.985080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.136 [2024-12-07 02:43:14.985117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.136 [2024-12-07 02:43:14.985143] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.136 [2024-12-07 02:43:14.985151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.136 [2024-12-07 02:43:14.985157] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.136 [2024-12-07 02:43:14.985166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:04.136 02:43:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.136 02:43:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.136 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.136 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.136 "name": 
"Existed_Raid", 00:10:04.136 "uuid": "3c9d3754-9442-4588-8735-ba6419a9ca6b", 00:10:04.136 "strip_size_kb": 64, 00:10:04.136 "state": "configuring", 00:10:04.136 "raid_level": "raid0", 00:10:04.136 "superblock": true, 00:10:04.136 "num_base_bdevs": 4, 00:10:04.136 "num_base_bdevs_discovered": 1, 00:10:04.136 "num_base_bdevs_operational": 4, 00:10:04.136 "base_bdevs_list": [ 00:10:04.136 { 00:10:04.136 "name": "BaseBdev1", 00:10:04.136 "uuid": "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262", 00:10:04.136 "is_configured": true, 00:10:04.136 "data_offset": 2048, 00:10:04.136 "data_size": 63488 00:10:04.136 }, 00:10:04.136 { 00:10:04.136 "name": "BaseBdev2", 00:10:04.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.136 "is_configured": false, 00:10:04.136 "data_offset": 0, 00:10:04.136 "data_size": 0 00:10:04.136 }, 00:10:04.136 { 00:10:04.136 "name": "BaseBdev3", 00:10:04.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.136 "is_configured": false, 00:10:04.136 "data_offset": 0, 00:10:04.136 "data_size": 0 00:10:04.136 }, 00:10:04.136 { 00:10:04.136 "name": "BaseBdev4", 00:10:04.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.136 "is_configured": false, 00:10:04.136 "data_offset": 0, 00:10:04.136 "data_size": 0 00:10:04.136 } 00:10:04.136 ] 00:10:04.136 }' 00:10:04.136 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.136 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.403 [2024-12-07 02:43:15.448139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:04.403 BaseBdev2 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.403 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.403 [ 00:10:04.403 { 00:10:04.403 "name": "BaseBdev2", 00:10:04.403 "aliases": [ 00:10:04.403 "35724ad1-e5e7-44f1-be3e-2f22bcdeb544" 00:10:04.403 ], 00:10:04.403 "product_name": "Malloc disk", 00:10:04.403 "block_size": 512, 00:10:04.403 "num_blocks": 65536, 00:10:04.403 "uuid": "35724ad1-e5e7-44f1-be3e-2f22bcdeb544", 00:10:04.403 
"assigned_rate_limits": { 00:10:04.403 "rw_ios_per_sec": 0, 00:10:04.403 "rw_mbytes_per_sec": 0, 00:10:04.403 "r_mbytes_per_sec": 0, 00:10:04.403 "w_mbytes_per_sec": 0 00:10:04.403 }, 00:10:04.403 "claimed": true, 00:10:04.403 "claim_type": "exclusive_write", 00:10:04.403 "zoned": false, 00:10:04.663 "supported_io_types": { 00:10:04.663 "read": true, 00:10:04.663 "write": true, 00:10:04.663 "unmap": true, 00:10:04.663 "flush": true, 00:10:04.663 "reset": true, 00:10:04.663 "nvme_admin": false, 00:10:04.663 "nvme_io": false, 00:10:04.663 "nvme_io_md": false, 00:10:04.663 "write_zeroes": true, 00:10:04.663 "zcopy": true, 00:10:04.663 "get_zone_info": false, 00:10:04.663 "zone_management": false, 00:10:04.663 "zone_append": false, 00:10:04.663 "compare": false, 00:10:04.663 "compare_and_write": false, 00:10:04.663 "abort": true, 00:10:04.663 "seek_hole": false, 00:10:04.663 "seek_data": false, 00:10:04.663 "copy": true, 00:10:04.663 "nvme_iov_md": false 00:10:04.663 }, 00:10:04.663 "memory_domains": [ 00:10:04.663 { 00:10:04.663 "dma_device_id": "system", 00:10:04.663 "dma_device_type": 1 00:10:04.663 }, 00:10:04.663 { 00:10:04.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.663 "dma_device_type": 2 00:10:04.663 } 00:10:04.663 ], 00:10:04.663 "driver_specific": {} 00:10:04.663 } 00:10:04.663 ] 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.663 "name": "Existed_Raid", 00:10:04.663 "uuid": "3c9d3754-9442-4588-8735-ba6419a9ca6b", 00:10:04.663 "strip_size_kb": 64, 00:10:04.663 "state": "configuring", 00:10:04.663 "raid_level": "raid0", 00:10:04.663 "superblock": true, 00:10:04.663 "num_base_bdevs": 4, 00:10:04.663 "num_base_bdevs_discovered": 2, 00:10:04.663 "num_base_bdevs_operational": 4, 
00:10:04.663 "base_bdevs_list": [ 00:10:04.663 { 00:10:04.663 "name": "BaseBdev1", 00:10:04.663 "uuid": "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262", 00:10:04.663 "is_configured": true, 00:10:04.663 "data_offset": 2048, 00:10:04.663 "data_size": 63488 00:10:04.663 }, 00:10:04.663 { 00:10:04.663 "name": "BaseBdev2", 00:10:04.663 "uuid": "35724ad1-e5e7-44f1-be3e-2f22bcdeb544", 00:10:04.663 "is_configured": true, 00:10:04.663 "data_offset": 2048, 00:10:04.663 "data_size": 63488 00:10:04.663 }, 00:10:04.663 { 00:10:04.663 "name": "BaseBdev3", 00:10:04.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.663 "is_configured": false, 00:10:04.663 "data_offset": 0, 00:10:04.663 "data_size": 0 00:10:04.663 }, 00:10:04.663 { 00:10:04.663 "name": "BaseBdev4", 00:10:04.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.663 "is_configured": false, 00:10:04.663 "data_offset": 0, 00:10:04.663 "data_size": 0 00:10:04.663 } 00:10:04.663 ] 00:10:04.663 }' 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.663 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.923 [2024-12-07 02:43:15.952123] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.923 BaseBdev3 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.923 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.923 [ 00:10:04.923 { 00:10:04.923 "name": "BaseBdev3", 00:10:04.923 "aliases": [ 00:10:04.923 "c0f8fedf-6583-4bba-ad04-dc686b8a8dbf" 00:10:04.923 ], 00:10:04.923 "product_name": "Malloc disk", 00:10:04.923 "block_size": 512, 00:10:04.923 "num_blocks": 65536, 00:10:04.923 "uuid": "c0f8fedf-6583-4bba-ad04-dc686b8a8dbf", 00:10:04.923 "assigned_rate_limits": { 00:10:04.923 "rw_ios_per_sec": 0, 00:10:04.923 "rw_mbytes_per_sec": 0, 00:10:04.923 "r_mbytes_per_sec": 0, 00:10:04.923 "w_mbytes_per_sec": 0 00:10:04.923 }, 00:10:04.923 "claimed": true, 00:10:04.923 "claim_type": "exclusive_write", 00:10:04.923 "zoned": false, 00:10:04.923 "supported_io_types": { 00:10:04.923 "read": true, 00:10:04.923 
"write": true, 00:10:04.923 "unmap": true, 00:10:04.923 "flush": true, 00:10:04.923 "reset": true, 00:10:04.923 "nvme_admin": false, 00:10:04.923 "nvme_io": false, 00:10:04.923 "nvme_io_md": false, 00:10:04.923 "write_zeroes": true, 00:10:04.923 "zcopy": true, 00:10:04.923 "get_zone_info": false, 00:10:04.923 "zone_management": false, 00:10:04.923 "zone_append": false, 00:10:04.923 "compare": false, 00:10:04.923 "compare_and_write": false, 00:10:04.923 "abort": true, 00:10:04.923 "seek_hole": false, 00:10:04.923 "seek_data": false, 00:10:04.923 "copy": true, 00:10:04.924 "nvme_iov_md": false 00:10:04.924 }, 00:10:04.924 "memory_domains": [ 00:10:04.924 { 00:10:04.924 "dma_device_id": "system", 00:10:04.924 "dma_device_type": 1 00:10:04.924 }, 00:10:04.924 { 00:10:04.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.924 "dma_device_type": 2 00:10:04.924 } 00:10:04.924 ], 00:10:04.924 "driver_specific": {} 00:10:04.924 } 00:10:04.924 ] 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.924 02:43:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.183 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.183 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.183 "name": "Existed_Raid", 00:10:05.183 "uuid": "3c9d3754-9442-4588-8735-ba6419a9ca6b", 00:10:05.183 "strip_size_kb": 64, 00:10:05.183 "state": "configuring", 00:10:05.183 "raid_level": "raid0", 00:10:05.183 "superblock": true, 00:10:05.183 "num_base_bdevs": 4, 00:10:05.183 "num_base_bdevs_discovered": 3, 00:10:05.183 "num_base_bdevs_operational": 4, 00:10:05.183 "base_bdevs_list": [ 00:10:05.183 { 00:10:05.183 "name": "BaseBdev1", 00:10:05.183 "uuid": "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262", 00:10:05.183 "is_configured": true, 00:10:05.183 "data_offset": 2048, 00:10:05.183 "data_size": 63488 00:10:05.183 }, 00:10:05.183 { 00:10:05.183 "name": "BaseBdev2", 00:10:05.183 "uuid": 
"35724ad1-e5e7-44f1-be3e-2f22bcdeb544", 00:10:05.183 "is_configured": true, 00:10:05.183 "data_offset": 2048, 00:10:05.183 "data_size": 63488 00:10:05.183 }, 00:10:05.183 { 00:10:05.183 "name": "BaseBdev3", 00:10:05.183 "uuid": "c0f8fedf-6583-4bba-ad04-dc686b8a8dbf", 00:10:05.183 "is_configured": true, 00:10:05.183 "data_offset": 2048, 00:10:05.183 "data_size": 63488 00:10:05.183 }, 00:10:05.183 { 00:10:05.183 "name": "BaseBdev4", 00:10:05.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.183 "is_configured": false, 00:10:05.183 "data_offset": 0, 00:10:05.183 "data_size": 0 00:10:05.183 } 00:10:05.183 ] 00:10:05.183 }' 00:10:05.183 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.183 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.443 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.443 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.443 [2024-12-07 02:43:16.432146] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.443 [2024-12-07 02:43:16.432382] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:05.443 [2024-12-07 02:43:16.432406] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:05.443 [2024-12-07 02:43:16.432769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:05.443 BaseBdev4 00:10:05.444 [2024-12-07 02:43:16.432927] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:05.444 [2024-12-07 02:43:16.432953] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:05.444 [2024-12-07 02:43:16.433092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.444 [ 00:10:05.444 { 00:10:05.444 "name": "BaseBdev4", 00:10:05.444 "aliases": [ 00:10:05.444 "5f91c479-d23e-4854-9c40-1d8cecc3ee48" 00:10:05.444 ], 00:10:05.444 "product_name": "Malloc disk", 00:10:05.444 "block_size": 512, 00:10:05.444 
"num_blocks": 65536, 00:10:05.444 "uuid": "5f91c479-d23e-4854-9c40-1d8cecc3ee48", 00:10:05.444 "assigned_rate_limits": { 00:10:05.444 "rw_ios_per_sec": 0, 00:10:05.444 "rw_mbytes_per_sec": 0, 00:10:05.444 "r_mbytes_per_sec": 0, 00:10:05.444 "w_mbytes_per_sec": 0 00:10:05.444 }, 00:10:05.444 "claimed": true, 00:10:05.444 "claim_type": "exclusive_write", 00:10:05.444 "zoned": false, 00:10:05.444 "supported_io_types": { 00:10:05.444 "read": true, 00:10:05.444 "write": true, 00:10:05.444 "unmap": true, 00:10:05.444 "flush": true, 00:10:05.444 "reset": true, 00:10:05.444 "nvme_admin": false, 00:10:05.444 "nvme_io": false, 00:10:05.444 "nvme_io_md": false, 00:10:05.444 "write_zeroes": true, 00:10:05.444 "zcopy": true, 00:10:05.444 "get_zone_info": false, 00:10:05.444 "zone_management": false, 00:10:05.444 "zone_append": false, 00:10:05.444 "compare": false, 00:10:05.444 "compare_and_write": false, 00:10:05.444 "abort": true, 00:10:05.444 "seek_hole": false, 00:10:05.444 "seek_data": false, 00:10:05.444 "copy": true, 00:10:05.444 "nvme_iov_md": false 00:10:05.444 }, 00:10:05.444 "memory_domains": [ 00:10:05.444 { 00:10:05.444 "dma_device_id": "system", 00:10:05.444 "dma_device_type": 1 00:10:05.444 }, 00:10:05.444 { 00:10:05.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.444 "dma_device_type": 2 00:10:05.444 } 00:10:05.444 ], 00:10:05.444 "driver_specific": {} 00:10:05.444 } 00:10:05.444 ] 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.444 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.703 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.703 "name": "Existed_Raid", 00:10:05.703 "uuid": "3c9d3754-9442-4588-8735-ba6419a9ca6b", 00:10:05.703 "strip_size_kb": 64, 00:10:05.703 "state": "online", 00:10:05.703 "raid_level": "raid0", 00:10:05.703 "superblock": true, 00:10:05.703 "num_base_bdevs": 4, 
00:10:05.703 "num_base_bdevs_discovered": 4, 00:10:05.703 "num_base_bdevs_operational": 4, 00:10:05.703 "base_bdevs_list": [ 00:10:05.703 { 00:10:05.703 "name": "BaseBdev1", 00:10:05.703 "uuid": "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262", 00:10:05.703 "is_configured": true, 00:10:05.703 "data_offset": 2048, 00:10:05.703 "data_size": 63488 00:10:05.703 }, 00:10:05.703 { 00:10:05.703 "name": "BaseBdev2", 00:10:05.703 "uuid": "35724ad1-e5e7-44f1-be3e-2f22bcdeb544", 00:10:05.703 "is_configured": true, 00:10:05.703 "data_offset": 2048, 00:10:05.703 "data_size": 63488 00:10:05.703 }, 00:10:05.703 { 00:10:05.703 "name": "BaseBdev3", 00:10:05.703 "uuid": "c0f8fedf-6583-4bba-ad04-dc686b8a8dbf", 00:10:05.703 "is_configured": true, 00:10:05.703 "data_offset": 2048, 00:10:05.703 "data_size": 63488 00:10:05.703 }, 00:10:05.703 { 00:10:05.703 "name": "BaseBdev4", 00:10:05.703 "uuid": "5f91c479-d23e-4854-9c40-1d8cecc3ee48", 00:10:05.703 "is_configured": true, 00:10:05.703 "data_offset": 2048, 00:10:05.703 "data_size": 63488 00:10:05.703 } 00:10:05.703 ] 00:10:05.703 }' 00:10:05.703 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.703 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.962 
02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.962 [2024-12-07 02:43:16.883815] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.962 "name": "Existed_Raid", 00:10:05.962 "aliases": [ 00:10:05.962 "3c9d3754-9442-4588-8735-ba6419a9ca6b" 00:10:05.962 ], 00:10:05.962 "product_name": "Raid Volume", 00:10:05.962 "block_size": 512, 00:10:05.962 "num_blocks": 253952, 00:10:05.962 "uuid": "3c9d3754-9442-4588-8735-ba6419a9ca6b", 00:10:05.962 "assigned_rate_limits": { 00:10:05.962 "rw_ios_per_sec": 0, 00:10:05.962 "rw_mbytes_per_sec": 0, 00:10:05.962 "r_mbytes_per_sec": 0, 00:10:05.962 "w_mbytes_per_sec": 0 00:10:05.962 }, 00:10:05.962 "claimed": false, 00:10:05.962 "zoned": false, 00:10:05.962 "supported_io_types": { 00:10:05.962 "read": true, 00:10:05.962 "write": true, 00:10:05.962 "unmap": true, 00:10:05.962 "flush": true, 00:10:05.962 "reset": true, 00:10:05.962 "nvme_admin": false, 00:10:05.962 "nvme_io": false, 00:10:05.962 "nvme_io_md": false, 00:10:05.962 "write_zeroes": true, 00:10:05.962 "zcopy": false, 00:10:05.962 "get_zone_info": false, 00:10:05.962 "zone_management": false, 00:10:05.962 "zone_append": false, 00:10:05.962 "compare": false, 00:10:05.962 "compare_and_write": false, 00:10:05.962 "abort": false, 00:10:05.962 "seek_hole": false, 00:10:05.962 "seek_data": false, 00:10:05.962 "copy": false, 00:10:05.962 
"nvme_iov_md": false 00:10:05.962 }, 00:10:05.962 "memory_domains": [ 00:10:05.962 { 00:10:05.962 "dma_device_id": "system", 00:10:05.962 "dma_device_type": 1 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.962 "dma_device_type": 2 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "dma_device_id": "system", 00:10:05.962 "dma_device_type": 1 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.962 "dma_device_type": 2 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "dma_device_id": "system", 00:10:05.962 "dma_device_type": 1 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.962 "dma_device_type": 2 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "dma_device_id": "system", 00:10:05.962 "dma_device_type": 1 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.962 "dma_device_type": 2 00:10:05.962 } 00:10:05.962 ], 00:10:05.962 "driver_specific": { 00:10:05.962 "raid": { 00:10:05.962 "uuid": "3c9d3754-9442-4588-8735-ba6419a9ca6b", 00:10:05.962 "strip_size_kb": 64, 00:10:05.962 "state": "online", 00:10:05.962 "raid_level": "raid0", 00:10:05.962 "superblock": true, 00:10:05.962 "num_base_bdevs": 4, 00:10:05.962 "num_base_bdevs_discovered": 4, 00:10:05.962 "num_base_bdevs_operational": 4, 00:10:05.962 "base_bdevs_list": [ 00:10:05.962 { 00:10:05.962 "name": "BaseBdev1", 00:10:05.962 "uuid": "079ba6e0-1fc5-48bc-b4dd-1c9e5170e262", 00:10:05.962 "is_configured": true, 00:10:05.962 "data_offset": 2048, 00:10:05.962 "data_size": 63488 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "name": "BaseBdev2", 00:10:05.962 "uuid": "35724ad1-e5e7-44f1-be3e-2f22bcdeb544", 00:10:05.962 "is_configured": true, 00:10:05.962 "data_offset": 2048, 00:10:05.962 "data_size": 63488 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "name": "BaseBdev3", 00:10:05.962 "uuid": "c0f8fedf-6583-4bba-ad04-dc686b8a8dbf", 00:10:05.962 "is_configured": true, 
00:10:05.962 "data_offset": 2048, 00:10:05.962 "data_size": 63488 00:10:05.962 }, 00:10:05.962 { 00:10:05.962 "name": "BaseBdev4", 00:10:05.962 "uuid": "5f91c479-d23e-4854-9c40-1d8cecc3ee48", 00:10:05.962 "is_configured": true, 00:10:05.962 "data_offset": 2048, 00:10:05.962 "data_size": 63488 00:10:05.962 } 00:10:05.962 ] 00:10:05.962 } 00:10:05.962 } 00:10:05.962 }' 00:10:05.962 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.963 BaseBdev2 00:10:05.963 BaseBdev3 00:10:05.963 BaseBdev4' 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.963 02:43:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.963 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.222 02:43:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.222 [2024-12-07 02:43:17.198935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:06.222 [2024-12-07 02:43:17.198971] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.222 [2024-12-07 02:43:17.199040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.222 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.223 "name": "Existed_Raid", 00:10:06.223 "uuid": "3c9d3754-9442-4588-8735-ba6419a9ca6b", 00:10:06.223 "strip_size_kb": 64, 00:10:06.223 "state": "offline", 00:10:06.223 "raid_level": "raid0", 00:10:06.223 "superblock": true, 00:10:06.223 "num_base_bdevs": 4, 00:10:06.223 "num_base_bdevs_discovered": 3, 00:10:06.223 "num_base_bdevs_operational": 3, 00:10:06.223 "base_bdevs_list": [ 00:10:06.223 { 00:10:06.223 "name": null, 00:10:06.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.223 "is_configured": false, 00:10:06.223 "data_offset": 0, 00:10:06.223 "data_size": 63488 00:10:06.223 }, 00:10:06.223 { 00:10:06.223 "name": "BaseBdev2", 00:10:06.223 "uuid": "35724ad1-e5e7-44f1-be3e-2f22bcdeb544", 00:10:06.223 "is_configured": true, 00:10:06.223 "data_offset": 2048, 00:10:06.223 "data_size": 63488 00:10:06.223 }, 00:10:06.223 { 00:10:06.223 "name": "BaseBdev3", 00:10:06.223 "uuid": "c0f8fedf-6583-4bba-ad04-dc686b8a8dbf", 00:10:06.223 "is_configured": true, 00:10:06.223 "data_offset": 2048, 00:10:06.223 "data_size": 63488 00:10:06.223 }, 00:10:06.223 { 00:10:06.223 "name": "BaseBdev4", 00:10:06.223 "uuid": "5f91c479-d23e-4854-9c40-1d8cecc3ee48", 00:10:06.223 "is_configured": true, 00:10:06.223 "data_offset": 2048, 00:10:06.223 "data_size": 63488 00:10:06.223 } 00:10:06.223 ] 00:10:06.223 }' 00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.223 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.792 02:43:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 [2024-12-07 02:43:17.710197] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 [2024-12-07 02:43:17.786543] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.792 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.793 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:06.793 02:43:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.793 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.793 [2024-12-07 02:43:17.854950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:06.793 [2024-12-07 02:43:17.855004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.053 BaseBdev2 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.053 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 [ 00:10:07.054 { 00:10:07.054 "name": "BaseBdev2", 00:10:07.054 "aliases": [ 00:10:07.054 
"557fbcdc-6121-4c74-8518-8cadd49dc586" 00:10:07.054 ], 00:10:07.054 "product_name": "Malloc disk", 00:10:07.054 "block_size": 512, 00:10:07.054 "num_blocks": 65536, 00:10:07.054 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:07.054 "assigned_rate_limits": { 00:10:07.054 "rw_ios_per_sec": 0, 00:10:07.054 "rw_mbytes_per_sec": 0, 00:10:07.054 "r_mbytes_per_sec": 0, 00:10:07.054 "w_mbytes_per_sec": 0 00:10:07.054 }, 00:10:07.054 "claimed": false, 00:10:07.054 "zoned": false, 00:10:07.054 "supported_io_types": { 00:10:07.054 "read": true, 00:10:07.054 "write": true, 00:10:07.054 "unmap": true, 00:10:07.054 "flush": true, 00:10:07.054 "reset": true, 00:10:07.054 "nvme_admin": false, 00:10:07.054 "nvme_io": false, 00:10:07.054 "nvme_io_md": false, 00:10:07.054 "write_zeroes": true, 00:10:07.054 "zcopy": true, 00:10:07.054 "get_zone_info": false, 00:10:07.054 "zone_management": false, 00:10:07.054 "zone_append": false, 00:10:07.054 "compare": false, 00:10:07.054 "compare_and_write": false, 00:10:07.054 "abort": true, 00:10:07.054 "seek_hole": false, 00:10:07.054 "seek_data": false, 00:10:07.054 "copy": true, 00:10:07.054 "nvme_iov_md": false 00:10:07.054 }, 00:10:07.054 "memory_domains": [ 00:10:07.054 { 00:10:07.054 "dma_device_id": "system", 00:10:07.054 "dma_device_type": 1 00:10:07.054 }, 00:10:07.054 { 00:10:07.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.054 "dma_device_type": 2 00:10:07.054 } 00:10:07.054 ], 00:10:07.054 "driver_specific": {} 00:10:07.054 } 00:10:07.054 ] 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.054 02:43:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.054 02:43:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 BaseBdev3 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 [ 00:10:07.054 { 
00:10:07.054 "name": "BaseBdev3", 00:10:07.054 "aliases": [ 00:10:07.054 "909a635a-fcdc-46b0-a664-ac2a36e7d2b0" 00:10:07.054 ], 00:10:07.054 "product_name": "Malloc disk", 00:10:07.054 "block_size": 512, 00:10:07.054 "num_blocks": 65536, 00:10:07.054 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:07.054 "assigned_rate_limits": { 00:10:07.054 "rw_ios_per_sec": 0, 00:10:07.054 "rw_mbytes_per_sec": 0, 00:10:07.054 "r_mbytes_per_sec": 0, 00:10:07.054 "w_mbytes_per_sec": 0 00:10:07.054 }, 00:10:07.054 "claimed": false, 00:10:07.054 "zoned": false, 00:10:07.054 "supported_io_types": { 00:10:07.054 "read": true, 00:10:07.054 "write": true, 00:10:07.054 "unmap": true, 00:10:07.054 "flush": true, 00:10:07.054 "reset": true, 00:10:07.054 "nvme_admin": false, 00:10:07.054 "nvme_io": false, 00:10:07.054 "nvme_io_md": false, 00:10:07.054 "write_zeroes": true, 00:10:07.054 "zcopy": true, 00:10:07.054 "get_zone_info": false, 00:10:07.054 "zone_management": false, 00:10:07.054 "zone_append": false, 00:10:07.054 "compare": false, 00:10:07.054 "compare_and_write": false, 00:10:07.054 "abort": true, 00:10:07.054 "seek_hole": false, 00:10:07.054 "seek_data": false, 00:10:07.054 "copy": true, 00:10:07.054 "nvme_iov_md": false 00:10:07.054 }, 00:10:07.054 "memory_domains": [ 00:10:07.054 { 00:10:07.054 "dma_device_id": "system", 00:10:07.054 "dma_device_type": 1 00:10:07.054 }, 00:10:07.054 { 00:10:07.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.054 "dma_device_type": 2 00:10:07.054 } 00:10:07.054 ], 00:10:07.054 "driver_specific": {} 00:10:07.054 } 00:10:07.054 ] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 BaseBdev4 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:07.054 [ 00:10:07.054 { 00:10:07.054 "name": "BaseBdev4", 00:10:07.054 "aliases": [ 00:10:07.054 "84cff490-da8a-46c6-b42b-8d987e42b23c" 00:10:07.054 ], 00:10:07.054 "product_name": "Malloc disk", 00:10:07.054 "block_size": 512, 00:10:07.054 "num_blocks": 65536, 00:10:07.054 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:07.054 "assigned_rate_limits": { 00:10:07.054 "rw_ios_per_sec": 0, 00:10:07.054 "rw_mbytes_per_sec": 0, 00:10:07.054 "r_mbytes_per_sec": 0, 00:10:07.054 "w_mbytes_per_sec": 0 00:10:07.054 }, 00:10:07.054 "claimed": false, 00:10:07.054 "zoned": false, 00:10:07.054 "supported_io_types": { 00:10:07.054 "read": true, 00:10:07.054 "write": true, 00:10:07.054 "unmap": true, 00:10:07.054 "flush": true, 00:10:07.054 "reset": true, 00:10:07.054 "nvme_admin": false, 00:10:07.054 "nvme_io": false, 00:10:07.054 "nvme_io_md": false, 00:10:07.054 "write_zeroes": true, 00:10:07.054 "zcopy": true, 00:10:07.054 "get_zone_info": false, 00:10:07.054 "zone_management": false, 00:10:07.054 "zone_append": false, 00:10:07.054 "compare": false, 00:10:07.054 "compare_and_write": false, 00:10:07.054 "abort": true, 00:10:07.054 "seek_hole": false, 00:10:07.054 "seek_data": false, 00:10:07.054 "copy": true, 00:10:07.054 "nvme_iov_md": false 00:10:07.054 }, 00:10:07.054 "memory_domains": [ 00:10:07.054 { 00:10:07.054 "dma_device_id": "system", 00:10:07.054 "dma_device_type": 1 00:10:07.054 }, 00:10:07.054 { 00:10:07.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.054 "dma_device_type": 2 00:10:07.054 } 00:10:07.054 ], 00:10:07.054 "driver_specific": {} 00:10:07.054 } 00:10:07.054 ] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:07.054 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.054 02:43:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.055 [2024-12-07 02:43:18.112113] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.055 [2024-12-07 02:43:18.112235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.055 [2024-12-07 02:43:18.112301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.055 [2024-12-07 02:43:18.114390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.055 [2024-12-07 02:43:18.114476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.055 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.313 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.313 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.313 "name": "Existed_Raid", 00:10:07.313 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:07.313 "strip_size_kb": 64, 00:10:07.313 "state": "configuring", 00:10:07.313 "raid_level": "raid0", 00:10:07.313 "superblock": true, 00:10:07.313 "num_base_bdevs": 4, 00:10:07.313 "num_base_bdevs_discovered": 3, 00:10:07.313 "num_base_bdevs_operational": 4, 00:10:07.313 "base_bdevs_list": [ 00:10:07.313 { 00:10:07.313 "name": "BaseBdev1", 00:10:07.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.313 "is_configured": false, 00:10:07.313 "data_offset": 0, 00:10:07.313 "data_size": 0 00:10:07.313 }, 00:10:07.313 { 00:10:07.313 "name": "BaseBdev2", 00:10:07.313 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:07.313 "is_configured": true, 00:10:07.313 "data_offset": 2048, 00:10:07.313 "data_size": 63488 
00:10:07.313 }, 00:10:07.313 { 00:10:07.313 "name": "BaseBdev3", 00:10:07.313 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:07.313 "is_configured": true, 00:10:07.313 "data_offset": 2048, 00:10:07.313 "data_size": 63488 00:10:07.313 }, 00:10:07.313 { 00:10:07.313 "name": "BaseBdev4", 00:10:07.313 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:07.313 "is_configured": true, 00:10:07.313 "data_offset": 2048, 00:10:07.313 "data_size": 63488 00:10:07.313 } 00:10:07.313 ] 00:10:07.313 }' 00:10:07.313 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.313 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.573 [2024-12-07 02:43:18.611301] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.573 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.832 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.832 "name": "Existed_Raid", 00:10:07.832 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:07.832 "strip_size_kb": 64, 00:10:07.832 "state": "configuring", 00:10:07.832 "raid_level": "raid0", 00:10:07.832 "superblock": true, 00:10:07.832 "num_base_bdevs": 4, 00:10:07.832 "num_base_bdevs_discovered": 2, 00:10:07.832 "num_base_bdevs_operational": 4, 00:10:07.832 "base_bdevs_list": [ 00:10:07.832 { 00:10:07.832 "name": "BaseBdev1", 00:10:07.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.832 "is_configured": false, 00:10:07.832 "data_offset": 0, 00:10:07.832 "data_size": 0 00:10:07.832 }, 00:10:07.832 { 00:10:07.832 "name": null, 00:10:07.832 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:07.832 "is_configured": false, 00:10:07.832 "data_offset": 0, 00:10:07.832 "data_size": 63488 
00:10:07.832 }, 00:10:07.832 { 00:10:07.832 "name": "BaseBdev3", 00:10:07.832 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:07.832 "is_configured": true, 00:10:07.832 "data_offset": 2048, 00:10:07.832 "data_size": 63488 00:10:07.832 }, 00:10:07.832 { 00:10:07.832 "name": "BaseBdev4", 00:10:07.832 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:07.832 "is_configured": true, 00:10:07.832 "data_offset": 2048, 00:10:07.832 "data_size": 63488 00:10:07.832 } 00:10:07.832 ] 00:10:07.832 }' 00:10:07.832 02:43:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.832 02:43:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.092 [2024-12-07 02:43:19.111327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.092 BaseBdev1 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.092 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.093 [ 00:10:08.093 { 00:10:08.093 "name": "BaseBdev1", 00:10:08.093 "aliases": [ 00:10:08.093 "a279f4c6-1f82-4a63-be38-884e63375d42" 00:10:08.093 ], 00:10:08.093 "product_name": "Malloc disk", 00:10:08.093 "block_size": 512, 00:10:08.093 "num_blocks": 65536, 00:10:08.093 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:08.093 "assigned_rate_limits": { 00:10:08.093 "rw_ios_per_sec": 0, 00:10:08.093 "rw_mbytes_per_sec": 0, 
00:10:08.093 "r_mbytes_per_sec": 0, 00:10:08.093 "w_mbytes_per_sec": 0 00:10:08.093 }, 00:10:08.093 "claimed": true, 00:10:08.093 "claim_type": "exclusive_write", 00:10:08.093 "zoned": false, 00:10:08.093 "supported_io_types": { 00:10:08.093 "read": true, 00:10:08.093 "write": true, 00:10:08.093 "unmap": true, 00:10:08.093 "flush": true, 00:10:08.093 "reset": true, 00:10:08.093 "nvme_admin": false, 00:10:08.093 "nvme_io": false, 00:10:08.093 "nvme_io_md": false, 00:10:08.093 "write_zeroes": true, 00:10:08.093 "zcopy": true, 00:10:08.093 "get_zone_info": false, 00:10:08.093 "zone_management": false, 00:10:08.093 "zone_append": false, 00:10:08.093 "compare": false, 00:10:08.093 "compare_and_write": false, 00:10:08.093 "abort": true, 00:10:08.093 "seek_hole": false, 00:10:08.093 "seek_data": false, 00:10:08.093 "copy": true, 00:10:08.093 "nvme_iov_md": false 00:10:08.093 }, 00:10:08.093 "memory_domains": [ 00:10:08.093 { 00:10:08.093 "dma_device_id": "system", 00:10:08.093 "dma_device_type": 1 00:10:08.093 }, 00:10:08.093 { 00:10:08.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.093 "dma_device_type": 2 00:10:08.093 } 00:10:08.093 ], 00:10:08.093 "driver_specific": {} 00:10:08.093 } 00:10:08.093 ] 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.093 02:43:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.093 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.352 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.352 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.352 "name": "Existed_Raid", 00:10:08.352 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:08.352 "strip_size_kb": 64, 00:10:08.352 "state": "configuring", 00:10:08.352 "raid_level": "raid0", 00:10:08.352 "superblock": true, 00:10:08.352 "num_base_bdevs": 4, 00:10:08.352 "num_base_bdevs_discovered": 3, 00:10:08.352 "num_base_bdevs_operational": 4, 00:10:08.352 "base_bdevs_list": [ 00:10:08.352 { 00:10:08.352 "name": "BaseBdev1", 00:10:08.352 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:08.352 "is_configured": true, 00:10:08.352 "data_offset": 2048, 00:10:08.352 "data_size": 63488 00:10:08.352 }, 00:10:08.352 { 
00:10:08.352 "name": null, 00:10:08.352 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:08.352 "is_configured": false, 00:10:08.352 "data_offset": 0, 00:10:08.352 "data_size": 63488 00:10:08.352 }, 00:10:08.352 { 00:10:08.352 "name": "BaseBdev3", 00:10:08.352 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:08.352 "is_configured": true, 00:10:08.352 "data_offset": 2048, 00:10:08.352 "data_size": 63488 00:10:08.352 }, 00:10:08.352 { 00:10:08.352 "name": "BaseBdev4", 00:10:08.352 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:08.352 "is_configured": true, 00:10:08.352 "data_offset": 2048, 00:10:08.352 "data_size": 63488 00:10:08.352 } 00:10:08.352 ] 00:10:08.352 }' 00:10:08.352 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.352 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.612 [2024-12-07 02:43:19.654428] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.612 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.872 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.872 02:43:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.872 "name": "Existed_Raid", 00:10:08.872 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:08.872 "strip_size_kb": 64, 00:10:08.872 "state": "configuring", 00:10:08.872 "raid_level": "raid0", 00:10:08.872 "superblock": true, 00:10:08.872 "num_base_bdevs": 4, 00:10:08.872 "num_base_bdevs_discovered": 2, 00:10:08.872 "num_base_bdevs_operational": 4, 00:10:08.872 "base_bdevs_list": [ 00:10:08.872 { 00:10:08.872 "name": "BaseBdev1", 00:10:08.872 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:08.872 "is_configured": true, 00:10:08.872 "data_offset": 2048, 00:10:08.872 "data_size": 63488 00:10:08.872 }, 00:10:08.872 { 00:10:08.872 "name": null, 00:10:08.872 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:08.872 "is_configured": false, 00:10:08.872 "data_offset": 0, 00:10:08.872 "data_size": 63488 00:10:08.872 }, 00:10:08.872 { 00:10:08.872 "name": null, 00:10:08.872 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:08.872 "is_configured": false, 00:10:08.872 "data_offset": 0, 00:10:08.872 "data_size": 63488 00:10:08.872 }, 00:10:08.872 { 00:10:08.872 "name": "BaseBdev4", 00:10:08.872 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:08.872 "is_configured": true, 00:10:08.872 "data_offset": 2048, 00:10:08.872 "data_size": 63488 00:10:08.872 } 00:10:08.872 ] 00:10:08.872 }' 00:10:08.872 02:43:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.872 02:43:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.131 
02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.131 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.390 [2024-12-07 02:43:20.209523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.390 "name": "Existed_Raid", 00:10:09.390 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:09.390 "strip_size_kb": 64, 00:10:09.390 "state": "configuring", 00:10:09.390 "raid_level": "raid0", 00:10:09.390 "superblock": true, 00:10:09.390 "num_base_bdevs": 4, 00:10:09.390 "num_base_bdevs_discovered": 3, 00:10:09.390 "num_base_bdevs_operational": 4, 00:10:09.390 "base_bdevs_list": [ 00:10:09.390 { 00:10:09.390 "name": "BaseBdev1", 00:10:09.390 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:09.390 "is_configured": true, 00:10:09.390 "data_offset": 2048, 00:10:09.390 "data_size": 63488 00:10:09.390 }, 00:10:09.390 { 00:10:09.390 "name": null, 00:10:09.390 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:09.390 "is_configured": false, 00:10:09.390 "data_offset": 0, 00:10:09.390 "data_size": 63488 00:10:09.390 }, 00:10:09.390 { 00:10:09.390 "name": "BaseBdev3", 00:10:09.390 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:09.390 "is_configured": true, 00:10:09.390 "data_offset": 2048, 00:10:09.390 "data_size": 63488 00:10:09.390 }, 00:10:09.390 { 00:10:09.390 "name": "BaseBdev4", 00:10:09.390 "uuid": 
"84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:09.390 "is_configured": true, 00:10:09.390 "data_offset": 2048, 00:10:09.390 "data_size": 63488 00:10:09.390 } 00:10:09.390 ] 00:10:09.390 }' 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.390 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.650 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.650 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.650 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.650 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.650 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.909 [2024-12-07 02:43:20.744616] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.909 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.909 "name": "Existed_Raid", 00:10:09.909 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:09.909 "strip_size_kb": 64, 00:10:09.909 "state": "configuring", 00:10:09.909 "raid_level": "raid0", 00:10:09.909 "superblock": true, 00:10:09.909 "num_base_bdevs": 4, 00:10:09.909 "num_base_bdevs_discovered": 2, 00:10:09.909 "num_base_bdevs_operational": 4, 00:10:09.910 "base_bdevs_list": [ 00:10:09.910 { 00:10:09.910 "name": null, 00:10:09.910 
"uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:09.910 "is_configured": false, 00:10:09.910 "data_offset": 0, 00:10:09.910 "data_size": 63488 00:10:09.910 }, 00:10:09.910 { 00:10:09.910 "name": null, 00:10:09.910 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:09.910 "is_configured": false, 00:10:09.910 "data_offset": 0, 00:10:09.910 "data_size": 63488 00:10:09.910 }, 00:10:09.910 { 00:10:09.910 "name": "BaseBdev3", 00:10:09.910 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:09.910 "is_configured": true, 00:10:09.910 "data_offset": 2048, 00:10:09.910 "data_size": 63488 00:10:09.910 }, 00:10:09.910 { 00:10:09.910 "name": "BaseBdev4", 00:10:09.910 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:09.910 "is_configured": true, 00:10:09.910 "data_offset": 2048, 00:10:09.910 "data_size": 63488 00:10:09.910 } 00:10:09.910 ] 00:10:09.910 }' 00:10:09.910 02:43:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.910 02:43:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.169 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.169 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.169 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.169 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.169 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.429 [2024-12-07 02:43:21.275129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.429 02:43:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.429 "name": "Existed_Raid", 00:10:10.429 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:10.429 "strip_size_kb": 64, 00:10:10.429 "state": "configuring", 00:10:10.429 "raid_level": "raid0", 00:10:10.429 "superblock": true, 00:10:10.429 "num_base_bdevs": 4, 00:10:10.429 "num_base_bdevs_discovered": 3, 00:10:10.429 "num_base_bdevs_operational": 4, 00:10:10.429 "base_bdevs_list": [ 00:10:10.429 { 00:10:10.429 "name": null, 00:10:10.429 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:10.429 "is_configured": false, 00:10:10.429 "data_offset": 0, 00:10:10.429 "data_size": 63488 00:10:10.429 }, 00:10:10.429 { 00:10:10.429 "name": "BaseBdev2", 00:10:10.429 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:10.429 "is_configured": true, 00:10:10.429 "data_offset": 2048, 00:10:10.429 "data_size": 63488 00:10:10.429 }, 00:10:10.429 { 00:10:10.429 "name": "BaseBdev3", 00:10:10.429 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:10.429 "is_configured": true, 00:10:10.429 "data_offset": 2048, 00:10:10.429 "data_size": 63488 00:10:10.429 }, 00:10:10.429 { 00:10:10.429 "name": "BaseBdev4", 00:10:10.429 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:10.429 "is_configured": true, 00:10:10.429 "data_offset": 2048, 00:10:10.429 "data_size": 63488 00:10:10.429 } 00:10:10.429 ] 00:10:10.429 }' 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.429 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:10.688 02:43:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a279f4c6-1f82-4a63-be38-884e63375d42 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.688 [2024-12-07 02:43:21.763042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:10.688 [2024-12-07 02:43:21.763319] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:10.688 [2024-12-07 02:43:21.763368] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:10.688 [2024-12-07 02:43:21.763726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:10.688 NewBaseBdev 00:10:10.688 [2024-12-07 02:43:21.763903] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:10.688 [2024-12-07 02:43:21.763951] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:10.688 [2024-12-07 02:43:21.764095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.688 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.948 02:43:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.948 [ 00:10:10.948 { 00:10:10.948 "name": "NewBaseBdev", 00:10:10.948 "aliases": [ 00:10:10.948 "a279f4c6-1f82-4a63-be38-884e63375d42" 00:10:10.948 ], 00:10:10.948 "product_name": "Malloc disk", 00:10:10.948 "block_size": 512, 00:10:10.948 "num_blocks": 65536, 00:10:10.948 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:10.948 "assigned_rate_limits": { 00:10:10.948 "rw_ios_per_sec": 0, 00:10:10.948 "rw_mbytes_per_sec": 0, 00:10:10.948 "r_mbytes_per_sec": 0, 00:10:10.948 "w_mbytes_per_sec": 0 00:10:10.948 }, 00:10:10.948 "claimed": true, 00:10:10.948 "claim_type": "exclusive_write", 00:10:10.948 "zoned": false, 00:10:10.948 "supported_io_types": { 00:10:10.948 "read": true, 00:10:10.948 "write": true, 00:10:10.948 "unmap": true, 00:10:10.948 "flush": true, 00:10:10.948 "reset": true, 00:10:10.948 "nvme_admin": false, 00:10:10.948 "nvme_io": false, 00:10:10.948 "nvme_io_md": false, 00:10:10.948 "write_zeroes": true, 00:10:10.948 "zcopy": true, 00:10:10.948 "get_zone_info": false, 00:10:10.948 "zone_management": false, 00:10:10.948 "zone_append": false, 00:10:10.948 "compare": false, 00:10:10.948 "compare_and_write": false, 00:10:10.948 "abort": true, 00:10:10.948 "seek_hole": false, 00:10:10.948 "seek_data": false, 00:10:10.948 "copy": true, 00:10:10.948 "nvme_iov_md": false 00:10:10.948 }, 00:10:10.948 "memory_domains": [ 00:10:10.948 { 00:10:10.948 "dma_device_id": "system", 00:10:10.948 "dma_device_type": 1 00:10:10.948 }, 00:10:10.948 { 00:10:10.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.948 "dma_device_type": 2 00:10:10.948 } 00:10:10.948 ], 00:10:10.948 "driver_specific": {} 00:10:10.948 } 00:10:10.948 ] 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:10.948 02:43:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.948 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.948 "name": "Existed_Raid", 00:10:10.948 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:10.948 "strip_size_kb": 64, 00:10:10.948 
"state": "online", 00:10:10.948 "raid_level": "raid0", 00:10:10.948 "superblock": true, 00:10:10.948 "num_base_bdevs": 4, 00:10:10.948 "num_base_bdevs_discovered": 4, 00:10:10.948 "num_base_bdevs_operational": 4, 00:10:10.948 "base_bdevs_list": [ 00:10:10.948 { 00:10:10.948 "name": "NewBaseBdev", 00:10:10.948 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:10.948 "is_configured": true, 00:10:10.948 "data_offset": 2048, 00:10:10.949 "data_size": 63488 00:10:10.949 }, 00:10:10.949 { 00:10:10.949 "name": "BaseBdev2", 00:10:10.949 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:10.949 "is_configured": true, 00:10:10.949 "data_offset": 2048, 00:10:10.949 "data_size": 63488 00:10:10.949 }, 00:10:10.949 { 00:10:10.949 "name": "BaseBdev3", 00:10:10.949 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:10.949 "is_configured": true, 00:10:10.949 "data_offset": 2048, 00:10:10.949 "data_size": 63488 00:10:10.949 }, 00:10:10.949 { 00:10:10.949 "name": "BaseBdev4", 00:10:10.949 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:10.949 "is_configured": true, 00:10:10.949 "data_offset": 2048, 00:10:10.949 "data_size": 63488 00:10:10.949 } 00:10:10.949 ] 00:10:10.949 }' 00:10:10.949 02:43:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.949 02:43:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.207 
02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.207 [2024-12-07 02:43:22.214713] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.207 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.207 "name": "Existed_Raid", 00:10:11.207 "aliases": [ 00:10:11.207 "bb0b275a-c5df-4892-b4da-fff5ee9c2079" 00:10:11.207 ], 00:10:11.207 "product_name": "Raid Volume", 00:10:11.207 "block_size": 512, 00:10:11.207 "num_blocks": 253952, 00:10:11.207 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:11.207 "assigned_rate_limits": { 00:10:11.208 "rw_ios_per_sec": 0, 00:10:11.208 "rw_mbytes_per_sec": 0, 00:10:11.208 "r_mbytes_per_sec": 0, 00:10:11.208 "w_mbytes_per_sec": 0 00:10:11.208 }, 00:10:11.208 "claimed": false, 00:10:11.208 "zoned": false, 00:10:11.208 "supported_io_types": { 00:10:11.208 "read": true, 00:10:11.208 "write": true, 00:10:11.208 "unmap": true, 00:10:11.208 "flush": true, 00:10:11.208 "reset": true, 00:10:11.208 "nvme_admin": false, 00:10:11.208 "nvme_io": false, 00:10:11.208 "nvme_io_md": false, 00:10:11.208 "write_zeroes": true, 00:10:11.208 "zcopy": false, 00:10:11.208 "get_zone_info": false, 00:10:11.208 "zone_management": false, 00:10:11.208 "zone_append": false, 00:10:11.208 "compare": false, 00:10:11.208 "compare_and_write": false, 00:10:11.208 "abort": 
false, 00:10:11.208 "seek_hole": false, 00:10:11.208 "seek_data": false, 00:10:11.208 "copy": false, 00:10:11.208 "nvme_iov_md": false 00:10:11.208 }, 00:10:11.208 "memory_domains": [ 00:10:11.208 { 00:10:11.208 "dma_device_id": "system", 00:10:11.208 "dma_device_type": 1 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.208 "dma_device_type": 2 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "dma_device_id": "system", 00:10:11.208 "dma_device_type": 1 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.208 "dma_device_type": 2 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "dma_device_id": "system", 00:10:11.208 "dma_device_type": 1 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.208 "dma_device_type": 2 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "dma_device_id": "system", 00:10:11.208 "dma_device_type": 1 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.208 "dma_device_type": 2 00:10:11.208 } 00:10:11.208 ], 00:10:11.208 "driver_specific": { 00:10:11.208 "raid": { 00:10:11.208 "uuid": "bb0b275a-c5df-4892-b4da-fff5ee9c2079", 00:10:11.208 "strip_size_kb": 64, 00:10:11.208 "state": "online", 00:10:11.208 "raid_level": "raid0", 00:10:11.208 "superblock": true, 00:10:11.208 "num_base_bdevs": 4, 00:10:11.208 "num_base_bdevs_discovered": 4, 00:10:11.208 "num_base_bdevs_operational": 4, 00:10:11.208 "base_bdevs_list": [ 00:10:11.208 { 00:10:11.208 "name": "NewBaseBdev", 00:10:11.208 "uuid": "a279f4c6-1f82-4a63-be38-884e63375d42", 00:10:11.208 "is_configured": true, 00:10:11.208 "data_offset": 2048, 00:10:11.208 "data_size": 63488 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "name": "BaseBdev2", 00:10:11.208 "uuid": "557fbcdc-6121-4c74-8518-8cadd49dc586", 00:10:11.208 "is_configured": true, 00:10:11.208 "data_offset": 2048, 00:10:11.208 "data_size": 63488 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 
"name": "BaseBdev3", 00:10:11.208 "uuid": "909a635a-fcdc-46b0-a664-ac2a36e7d2b0", 00:10:11.208 "is_configured": true, 00:10:11.208 "data_offset": 2048, 00:10:11.208 "data_size": 63488 00:10:11.208 }, 00:10:11.208 { 00:10:11.208 "name": "BaseBdev4", 00:10:11.208 "uuid": "84cff490-da8a-46c6-b42b-8d987e42b23c", 00:10:11.208 "is_configured": true, 00:10:11.208 "data_offset": 2048, 00:10:11.208 "data_size": 63488 00:10:11.208 } 00:10:11.208 ] 00:10:11.208 } 00:10:11.208 } 00:10:11.208 }' 00:10:11.208 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:11.466 BaseBdev2 00:10:11.466 BaseBdev3 00:10:11.466 BaseBdev4' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.466 02:43:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.466 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.725 [2024-12-07 02:43:22.553757] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.725 [2024-12-07 02:43:22.553831] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.725 [2024-12-07 02:43:22.553933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.725 [2024-12-07 02:43:22.554023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.725 [2024-12-07 02:43:22.554069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 81232 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 81232 ']' 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 81232 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.725 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81232 00:10:11.726 killing process with pid 81232 00:10:11.726 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.726 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.726 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81232' 00:10:11.726 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 81232 00:10:11.726 [2024-12-07 02:43:22.584959] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.726 02:43:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 81232 00:10:11.726 [2024-12-07 02:43:22.662381] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.984 02:43:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:11.984 00:10:11.984 real 0m9.897s 00:10:11.984 user 0m16.599s 00:10:11.984 sys 0m2.136s 00:10:11.984 02:43:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.984 02:43:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.985 ************************************ 00:10:11.985 END TEST raid_state_function_test_sb 00:10:11.985 ************************************ 00:10:12.243 02:43:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:12.243 02:43:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:12.243 02:43:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.243 02:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.243 ************************************ 00:10:12.243 START TEST raid_superblock_test 00:10:12.243 ************************************ 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81886 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81886 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81886 ']' 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.243 02:43:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.243 [2024-12-07 02:43:23.199065] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:12.243 [2024-12-07 02:43:23.199272] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81886 ] 00:10:12.502 [2024-12-07 02:43:23.384854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.502 [2024-12-07 02:43:23.454445] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.502 [2024-12-07 02:43:23.530635] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.502 [2024-12-07 02:43:23.530677] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:13.070 
02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.070 malloc1 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.070 [2024-12-07 02:43:24.048656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.070 [2024-12-07 02:43:24.048805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.070 [2024-12-07 02:43:24.048846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:13.070 [2024-12-07 02:43:24.048882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.070 [2024-12-07 02:43:24.051246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.070 [2024-12-07 02:43:24.051320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.070 pt1 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:13.070 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.071 malloc2 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.071 [2024-12-07 02:43:24.097514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:13.071 [2024-12-07 02:43:24.097597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.071 [2024-12-07 02:43:24.097619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:13.071 [2024-12-07 02:43:24.097633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.071 [2024-12-07 02:43:24.100516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.071 [2024-12-07 02:43:24.100612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:13.071 
pt2 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.071 malloc3 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.071 [2024-12-07 02:43:24.132084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.071 [2024-12-07 02:43:24.132189] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.071 [2024-12-07 02:43:24.132223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:13.071 [2024-12-07 02:43:24.132257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.071 [2024-12-07 02:43:24.134534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.071 [2024-12-07 02:43:24.134618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.071 pt3 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.071 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.330 malloc4 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.330 [2024-12-07 02:43:24.170651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:13.330 [2024-12-07 02:43:24.170754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.330 [2024-12-07 02:43:24.170801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:13.330 [2024-12-07 02:43:24.170835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.330 [2024-12-07 02:43:24.173147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.330 [2024-12-07 02:43:24.173215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:13.330 pt4 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:13.330 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.331 [2024-12-07 02:43:24.182712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.331 [2024-12-07 
02:43:24.184720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.331 [2024-12-07 02:43:24.184777] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.331 [2024-12-07 02:43:24.184837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:13.331 [2024-12-07 02:43:24.184993] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:13.331 [2024-12-07 02:43:24.185005] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:13.331 [2024-12-07 02:43:24.185222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:13.331 [2024-12-07 02:43:24.185358] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:13.331 [2024-12-07 02:43:24.185367] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:13.331 [2024-12-07 02:43:24.185490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.331 "name": "raid_bdev1", 00:10:13.331 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:13.331 "strip_size_kb": 64, 00:10:13.331 "state": "online", 00:10:13.331 "raid_level": "raid0", 00:10:13.331 "superblock": true, 00:10:13.331 "num_base_bdevs": 4, 00:10:13.331 "num_base_bdevs_discovered": 4, 00:10:13.331 "num_base_bdevs_operational": 4, 00:10:13.331 "base_bdevs_list": [ 00:10:13.331 { 00:10:13.331 "name": "pt1", 00:10:13.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.331 "is_configured": true, 00:10:13.331 "data_offset": 2048, 00:10:13.331 "data_size": 63488 00:10:13.331 }, 00:10:13.331 { 00:10:13.331 "name": "pt2", 00:10:13.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.331 "is_configured": true, 00:10:13.331 "data_offset": 2048, 00:10:13.331 "data_size": 63488 00:10:13.331 }, 00:10:13.331 { 00:10:13.331 "name": "pt3", 00:10:13.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.331 "is_configured": true, 00:10:13.331 "data_offset": 2048, 00:10:13.331 
"data_size": 63488 00:10:13.331 }, 00:10:13.331 { 00:10:13.331 "name": "pt4", 00:10:13.331 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.331 "is_configured": true, 00:10:13.331 "data_offset": 2048, 00:10:13.331 "data_size": 63488 00:10:13.331 } 00:10:13.331 ] 00:10:13.331 }' 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.331 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.590 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.590 [2024-12-07 02:43:24.646225] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.849 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.849 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.849 "name": "raid_bdev1", 00:10:13.849 "aliases": [ 00:10:13.849 "43c95f05-1c96-467e-988f-e83cf68cd6aa" 
00:10:13.849 ], 00:10:13.849 "product_name": "Raid Volume", 00:10:13.849 "block_size": 512, 00:10:13.849 "num_blocks": 253952, 00:10:13.849 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:13.849 "assigned_rate_limits": { 00:10:13.849 "rw_ios_per_sec": 0, 00:10:13.849 "rw_mbytes_per_sec": 0, 00:10:13.849 "r_mbytes_per_sec": 0, 00:10:13.849 "w_mbytes_per_sec": 0 00:10:13.849 }, 00:10:13.849 "claimed": false, 00:10:13.849 "zoned": false, 00:10:13.849 "supported_io_types": { 00:10:13.849 "read": true, 00:10:13.849 "write": true, 00:10:13.849 "unmap": true, 00:10:13.849 "flush": true, 00:10:13.849 "reset": true, 00:10:13.849 "nvme_admin": false, 00:10:13.849 "nvme_io": false, 00:10:13.849 "nvme_io_md": false, 00:10:13.849 "write_zeroes": true, 00:10:13.849 "zcopy": false, 00:10:13.849 "get_zone_info": false, 00:10:13.849 "zone_management": false, 00:10:13.849 "zone_append": false, 00:10:13.849 "compare": false, 00:10:13.850 "compare_and_write": false, 00:10:13.850 "abort": false, 00:10:13.850 "seek_hole": false, 00:10:13.850 "seek_data": false, 00:10:13.850 "copy": false, 00:10:13.850 "nvme_iov_md": false 00:10:13.850 }, 00:10:13.850 "memory_domains": [ 00:10:13.850 { 00:10:13.850 "dma_device_id": "system", 00:10:13.850 "dma_device_type": 1 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.850 "dma_device_type": 2 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "dma_device_id": "system", 00:10:13.850 "dma_device_type": 1 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.850 "dma_device_type": 2 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "dma_device_id": "system", 00:10:13.850 "dma_device_type": 1 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.850 "dma_device_type": 2 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "dma_device_id": "system", 00:10:13.850 "dma_device_type": 1 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:13.850 "dma_device_type": 2 00:10:13.850 } 00:10:13.850 ], 00:10:13.850 "driver_specific": { 00:10:13.850 "raid": { 00:10:13.850 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:13.850 "strip_size_kb": 64, 00:10:13.850 "state": "online", 00:10:13.850 "raid_level": "raid0", 00:10:13.850 "superblock": true, 00:10:13.850 "num_base_bdevs": 4, 00:10:13.850 "num_base_bdevs_discovered": 4, 00:10:13.850 "num_base_bdevs_operational": 4, 00:10:13.850 "base_bdevs_list": [ 00:10:13.850 { 00:10:13.850 "name": "pt1", 00:10:13.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.850 "is_configured": true, 00:10:13.850 "data_offset": 2048, 00:10:13.850 "data_size": 63488 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "name": "pt2", 00:10:13.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.850 "is_configured": true, 00:10:13.850 "data_offset": 2048, 00:10:13.850 "data_size": 63488 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "name": "pt3", 00:10:13.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.850 "is_configured": true, 00:10:13.850 "data_offset": 2048, 00:10:13.850 "data_size": 63488 00:10:13.850 }, 00:10:13.850 { 00:10:13.850 "name": "pt4", 00:10:13.850 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.850 "is_configured": true, 00:10:13.850 "data_offset": 2048, 00:10:13.850 "data_size": 63488 00:10:13.850 } 00:10:13.850 ] 00:10:13.850 } 00:10:13.850 } 00:10:13.850 }' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.850 pt2 00:10:13.850 pt3 00:10:13.850 pt4' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.850 02:43:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 [2024-12-07 02:43:24.949567] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=43c95f05-1c96-467e-988f-e83cf68cd6aa 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 43c95f05-1c96-467e-988f-e83cf68cd6aa ']' 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 [2024-12-07 02:43:24.993232] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.110 [2024-12-07 02:43:24.993305] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.110 [2024-12-07 02:43:24.993414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.110 [2024-12-07 02:43:24.993509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.110 [2024-12-07 02:43:24.993562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:14.110 02:43:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.110 02:43:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.110 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.110 [2024-12-07 02:43:25.140992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:14.110 [2024-12-07 02:43:25.143184] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:14.110 [2024-12-07 02:43:25.143275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:14.110 [2024-12-07 02:43:25.143322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:14.110 [2024-12-07 02:43:25.143389] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:14.110 [2024-12-07 02:43:25.143466] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:14.110 [2024-12-07 02:43:25.143517] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:14.110 [2024-12-07 02:43:25.143535] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:14.110 [2024-12-07 02:43:25.143549] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:14.110 [2024-12-07 02:43:25.143558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:10:14.110 request: 00:10:14.111 { 00:10:14.111 "name": "raid_bdev1", 00:10:14.111 "raid_level": "raid0", 00:10:14.111 "base_bdevs": [ 00:10:14.111 "malloc1", 00:10:14.111 "malloc2", 00:10:14.111 "malloc3", 00:10:14.111 "malloc4" 00:10:14.111 ], 00:10:14.111 "strip_size_kb": 64, 00:10:14.111 "superblock": false, 00:10:14.111 "method": "bdev_raid_create", 00:10:14.111 "req_id": 1 00:10:14.111 } 00:10:14.111 Got JSON-RPC error response 00:10:14.111 response: 00:10:14.111 { 00:10:14.111 "code": -17, 00:10:14.111 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:14.111 } 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:14.111 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.370 [2024-12-07 02:43:25.196840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:14.370 [2024-12-07 02:43:25.196933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.370 [2024-12-07 02:43:25.196987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:14.370 [2024-12-07 02:43:25.197015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.370 [2024-12-07 02:43:25.199455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.370 [2024-12-07 02:43:25.199532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:14.370 [2024-12-07 02:43:25.199633] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:14.370 [2024-12-07 02:43:25.199693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:14.370 pt1 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.370 "name": "raid_bdev1", 00:10:14.370 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:14.370 "strip_size_kb": 64, 00:10:14.370 "state": "configuring", 00:10:14.370 "raid_level": "raid0", 00:10:14.370 "superblock": true, 00:10:14.370 "num_base_bdevs": 4, 00:10:14.370 "num_base_bdevs_discovered": 1, 00:10:14.370 "num_base_bdevs_operational": 4, 00:10:14.370 "base_bdevs_list": [ 00:10:14.370 { 00:10:14.370 "name": "pt1", 00:10:14.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.370 "is_configured": true, 00:10:14.370 "data_offset": 2048, 00:10:14.370 "data_size": 63488 00:10:14.370 }, 00:10:14.370 { 00:10:14.370 "name": null, 00:10:14.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.370 "is_configured": false, 00:10:14.370 "data_offset": 2048, 00:10:14.370 "data_size": 63488 00:10:14.370 }, 00:10:14.370 { 00:10:14.370 "name": null, 00:10:14.370 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.370 "is_configured": false, 00:10:14.370 "data_offset": 2048, 00:10:14.370 "data_size": 63488 00:10:14.370 }, 00:10:14.370 { 00:10:14.370 "name": null, 00:10:14.370 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.370 "is_configured": false, 00:10:14.370 "data_offset": 2048, 00:10:14.370 "data_size": 63488 00:10:14.370 } 00:10:14.370 ] 00:10:14.370 }' 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.370 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.629 [2024-12-07 02:43:25.640138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.629 [2024-12-07 02:43:25.640202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.629 [2024-12-07 02:43:25.640228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:14.629 [2024-12-07 02:43:25.640238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.629 [2024-12-07 02:43:25.640711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.629 [2024-12-07 02:43:25.640729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.629 [2024-12-07 02:43:25.640810] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.629 [2024-12-07 02:43:25.640833] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.629 pt2 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.629 [2024-12-07 02:43:25.652122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.629 02:43:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.629 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.887 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.888 "name": "raid_bdev1", 00:10:14.888 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:14.888 "strip_size_kb": 64, 00:10:14.888 "state": "configuring", 00:10:14.888 "raid_level": "raid0", 00:10:14.888 "superblock": true, 00:10:14.888 "num_base_bdevs": 4, 00:10:14.888 "num_base_bdevs_discovered": 1, 00:10:14.888 "num_base_bdevs_operational": 4, 00:10:14.888 "base_bdevs_list": [ 00:10:14.888 { 00:10:14.888 "name": "pt1", 00:10:14.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.888 "is_configured": true, 00:10:14.888 "data_offset": 2048, 00:10:14.888 "data_size": 63488 00:10:14.888 }, 00:10:14.888 { 00:10:14.888 "name": null, 00:10:14.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.888 "is_configured": false, 00:10:14.888 "data_offset": 0, 00:10:14.888 "data_size": 63488 00:10:14.888 }, 00:10:14.888 { 00:10:14.888 "name": null, 00:10:14.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.888 "is_configured": false, 00:10:14.888 "data_offset": 2048, 00:10:14.888 "data_size": 63488 00:10:14.888 }, 00:10:14.888 { 00:10:14.888 "name": null, 00:10:14.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.888 "is_configured": false, 00:10:14.888 "data_offset": 2048, 00:10:14.888 "data_size": 63488 00:10:14.888 } 00:10:14.888 ] 00:10:14.888 }' 00:10:14.888 02:43:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.888 02:43:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.147 [2024-12-07 02:43:26.119377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:15.147 [2024-12-07 02:43:26.119549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.147 [2024-12-07 02:43:26.119588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:15.147 [2024-12-07 02:43:26.119633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.147 [2024-12-07 02:43:26.120128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.147 [2024-12-07 02:43:26.120186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:15.147 [2024-12-07 02:43:26.120296] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:15.147 [2024-12-07 02:43:26.120349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:15.147 pt2 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.147 [2024-12-07 02:43:26.131280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:15.147 [2024-12-07 02:43:26.131388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.147 [2024-12-07 02:43:26.131422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:15.147 [2024-12-07 02:43:26.131451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.147 [2024-12-07 02:43:26.131858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.147 [2024-12-07 02:43:26.131912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:15.147 [2024-12-07 02:43:26.131998] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:15.147 [2024-12-07 02:43:26.132046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:15.147 pt3 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.147 [2024-12-07 02:43:26.143259] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:15.147 [2024-12-07 02:43:26.143326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.147 [2024-12-07 02:43:26.143341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:15.147 [2024-12-07 02:43:26.143350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.147 [2024-12-07 02:43:26.143683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.147 [2024-12-07 02:43:26.143702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:15.147 [2024-12-07 02:43:26.143752] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:15.147 [2024-12-07 02:43:26.143773] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:15.147 [2024-12-07 02:43:26.143872] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:15.147 [2024-12-07 02:43:26.143886] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.147 [2024-12-07 02:43:26.144123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:15.147 [2024-12-07 02:43:26.144246] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:15.147 [2024-12-07 02:43:26.144255] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:15.147 [2024-12-07 02:43:26.144353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.147 pt4 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:15.147 
02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.147 "name": "raid_bdev1", 00:10:15.147 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:15.147 "strip_size_kb": 64, 00:10:15.147 "state": "online", 00:10:15.147 "raid_level": "raid0", 00:10:15.147 "superblock": true, 00:10:15.147 
"num_base_bdevs": 4, 00:10:15.147 "num_base_bdevs_discovered": 4, 00:10:15.147 "num_base_bdevs_operational": 4, 00:10:15.147 "base_bdevs_list": [ 00:10:15.147 { 00:10:15.147 "name": "pt1", 00:10:15.147 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.147 "is_configured": true, 00:10:15.147 "data_offset": 2048, 00:10:15.147 "data_size": 63488 00:10:15.147 }, 00:10:15.147 { 00:10:15.147 "name": "pt2", 00:10:15.147 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.147 "is_configured": true, 00:10:15.147 "data_offset": 2048, 00:10:15.147 "data_size": 63488 00:10:15.147 }, 00:10:15.147 { 00:10:15.147 "name": "pt3", 00:10:15.147 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.147 "is_configured": true, 00:10:15.147 "data_offset": 2048, 00:10:15.147 "data_size": 63488 00:10:15.147 }, 00:10:15.147 { 00:10:15.147 "name": "pt4", 00:10:15.147 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.147 "is_configured": true, 00:10:15.147 "data_offset": 2048, 00:10:15.147 "data_size": 63488 00:10:15.147 } 00:10:15.147 ] 00:10:15.147 }' 00:10:15.147 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.148 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.716 [2024-12-07 02:43:26.598820] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.716 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.716 "name": "raid_bdev1", 00:10:15.716 "aliases": [ 00:10:15.716 "43c95f05-1c96-467e-988f-e83cf68cd6aa" 00:10:15.716 ], 00:10:15.716 "product_name": "Raid Volume", 00:10:15.716 "block_size": 512, 00:10:15.716 "num_blocks": 253952, 00:10:15.716 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:15.716 "assigned_rate_limits": { 00:10:15.716 "rw_ios_per_sec": 0, 00:10:15.716 "rw_mbytes_per_sec": 0, 00:10:15.716 "r_mbytes_per_sec": 0, 00:10:15.716 "w_mbytes_per_sec": 0 00:10:15.716 }, 00:10:15.716 "claimed": false, 00:10:15.716 "zoned": false, 00:10:15.716 "supported_io_types": { 00:10:15.716 "read": true, 00:10:15.716 "write": true, 00:10:15.716 "unmap": true, 00:10:15.716 "flush": true, 00:10:15.716 "reset": true, 00:10:15.716 "nvme_admin": false, 00:10:15.716 "nvme_io": false, 00:10:15.716 "nvme_io_md": false, 00:10:15.716 "write_zeroes": true, 00:10:15.716 "zcopy": false, 00:10:15.716 "get_zone_info": false, 00:10:15.716 "zone_management": false, 00:10:15.716 "zone_append": false, 00:10:15.716 "compare": false, 00:10:15.716 "compare_and_write": false, 00:10:15.716 "abort": false, 00:10:15.716 "seek_hole": false, 00:10:15.716 "seek_data": false, 00:10:15.716 "copy": false, 00:10:15.716 "nvme_iov_md": false 00:10:15.716 }, 00:10:15.716 "memory_domains": [ 00:10:15.716 { 00:10:15.716 "dma_device_id": "system", 
00:10:15.716 "dma_device_type": 1 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.716 "dma_device_type": 2 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "dma_device_id": "system", 00:10:15.716 "dma_device_type": 1 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.716 "dma_device_type": 2 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "dma_device_id": "system", 00:10:15.716 "dma_device_type": 1 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.716 "dma_device_type": 2 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "dma_device_id": "system", 00:10:15.716 "dma_device_type": 1 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.716 "dma_device_type": 2 00:10:15.716 } 00:10:15.716 ], 00:10:15.716 "driver_specific": { 00:10:15.716 "raid": { 00:10:15.716 "uuid": "43c95f05-1c96-467e-988f-e83cf68cd6aa", 00:10:15.716 "strip_size_kb": 64, 00:10:15.716 "state": "online", 00:10:15.716 "raid_level": "raid0", 00:10:15.716 "superblock": true, 00:10:15.716 "num_base_bdevs": 4, 00:10:15.716 "num_base_bdevs_discovered": 4, 00:10:15.716 "num_base_bdevs_operational": 4, 00:10:15.716 "base_bdevs_list": [ 00:10:15.716 { 00:10:15.716 "name": "pt1", 00:10:15.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.716 "is_configured": true, 00:10:15.716 "data_offset": 2048, 00:10:15.716 "data_size": 63488 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "name": "pt2", 00:10:15.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.716 "is_configured": true, 00:10:15.716 "data_offset": 2048, 00:10:15.716 "data_size": 63488 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "name": "pt3", 00:10:15.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.716 "is_configured": true, 00:10:15.716 "data_offset": 2048, 00:10:15.716 "data_size": 63488 00:10:15.716 }, 00:10:15.716 { 00:10:15.716 "name": "pt4", 00:10:15.716 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.716 "is_configured": true, 00:10:15.716 "data_offset": 2048, 00:10:15.716 "data_size": 63488 00:10:15.716 } 00:10:15.717 ] 00:10:15.717 } 00:10:15.717 } 00:10:15.717 }' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.717 pt2 00:10:15.717 pt3 00:10:15.717 pt4' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:15.717 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:15.977 [2024-12-07 02:43:26.870281] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 43c95f05-1c96-467e-988f-e83cf68cd6aa '!=' 43c95f05-1c96-467e-988f-e83cf68cd6aa ']' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81886 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81886 ']' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81886 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:15.977 02:43:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81886 00:10:15.977 killing process with pid 81886 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81886' 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81886 00:10:15.977 [2024-12-07 02:43:26.954439] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.977 [2024-12-07 02:43:26.954559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.977 02:43:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81886 00:10:15.977 [2024-12-07 02:43:26.954653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.977 [2024-12-07 02:43:26.954669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:15.977 [2024-12-07 02:43:27.035661] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.560 02:43:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:16.561 ************************************ 00:10:16.561 END TEST raid_superblock_test 00:10:16.561 ************************************ 00:10:16.561 00:10:16.561 real 0m4.301s 00:10:16.561 user 0m6.525s 00:10:16.561 sys 0m1.018s 00:10:16.561 02:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.561 02:43:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.561 
02:43:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:16.561 02:43:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:16.561 02:43:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.561 02:43:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.561 ************************************ 00:10:16.561 START TEST raid_read_error_test 00:10:16.561 ************************************ 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6qzXgsOtPl 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82134 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:16.561 02:43:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82134 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 82134 ']' 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.561 02:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.561 [2024-12-07 02:43:27.596729] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:16.561 [2024-12-07 02:43:27.596973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82134 ] 00:10:16.834 [2024-12-07 02:43:27.762255] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.834 [2024-12-07 02:43:27.835636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.094 [2024-12-07 02:43:27.912537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.094 [2024-12-07 02:43:27.912718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.355 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.355 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:17.355 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.355 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:17.355 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.355 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 BaseBdev1_malloc 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 true 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 [2024-12-07 02:43:28.455708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:17.615 [2024-12-07 02:43:28.455763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.615 [2024-12-07 02:43:28.455783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:17.615 [2024-12-07 02:43:28.455792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.615 [2024-12-07 02:43:28.458203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.615 [2024-12-07 02:43:28.458240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:17.615 BaseBdev1 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 BaseBdev2_malloc 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 true 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 [2024-12-07 02:43:28.511554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:17.615 [2024-12-07 02:43:28.511615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.615 [2024-12-07 02:43:28.511637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:17.615 [2024-12-07 02:43:28.511645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.615 [2024-12-07 02:43:28.513941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.615 [2024-12-07 02:43:28.514023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:17.615 BaseBdev2 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 BaseBdev3_malloc 00:10:17.615 02:43:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 true 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.615 [2024-12-07 02:43:28.558089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:17.615 [2024-12-07 02:43:28.558137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.615 [2024-12-07 02:43:28.558171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:17.615 [2024-12-07 02:43:28.558180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.615 [2024-12-07 02:43:28.560487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.615 [2024-12-07 02:43:28.560522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:17.615 BaseBdev3 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.615 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.616 BaseBdev4_malloc 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.616 true 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.616 [2024-12-07 02:43:28.604445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:17.616 [2024-12-07 02:43:28.604531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.616 [2024-12-07 02:43:28.604556] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:17.616 [2024-12-07 02:43:28.604566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.616 [2024-12-07 02:43:28.606836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.616 [2024-12-07 02:43:28.606873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:17.616 BaseBdev4 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.616 [2024-12-07 02:43:28.616483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.616 [2024-12-07 02:43:28.618545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.616 [2024-12-07 02:43:28.618637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.616 [2024-12-07 02:43:28.618688] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:17.616 [2024-12-07 02:43:28.618871] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:17.616 [2024-12-07 02:43:28.618882] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:17.616 [2024-12-07 02:43:28.619128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:17.616 [2024-12-07 02:43:28.619252] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:17.616 [2024-12-07 02:43:28.619266] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:17.616 [2024-12-07 02:43:28.619385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:17.616 02:43:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.616 "name": "raid_bdev1", 00:10:17.616 "uuid": "8c7d5dad-3b2e-42da-95c9-23715db22adf", 00:10:17.616 "strip_size_kb": 64, 00:10:17.616 "state": "online", 00:10:17.616 "raid_level": "raid0", 00:10:17.616 "superblock": true, 00:10:17.616 "num_base_bdevs": 4, 00:10:17.616 "num_base_bdevs_discovered": 4, 00:10:17.616 "num_base_bdevs_operational": 4, 00:10:17.616 "base_bdevs_list": [ 00:10:17.616 
{ 00:10:17.616 "name": "BaseBdev1", 00:10:17.616 "uuid": "3180ef2d-825a-56ba-af72-5226133433f2", 00:10:17.616 "is_configured": true, 00:10:17.616 "data_offset": 2048, 00:10:17.616 "data_size": 63488 00:10:17.616 }, 00:10:17.616 { 00:10:17.616 "name": "BaseBdev2", 00:10:17.616 "uuid": "c2173685-23d1-5be4-825a-f59a214c1c43", 00:10:17.616 "is_configured": true, 00:10:17.616 "data_offset": 2048, 00:10:17.616 "data_size": 63488 00:10:17.616 }, 00:10:17.616 { 00:10:17.616 "name": "BaseBdev3", 00:10:17.616 "uuid": "ed91ddfc-3f14-5685-8fa8-ecdf8be6e76b", 00:10:17.616 "is_configured": true, 00:10:17.616 "data_offset": 2048, 00:10:17.616 "data_size": 63488 00:10:17.616 }, 00:10:17.616 { 00:10:17.616 "name": "BaseBdev4", 00:10:17.616 "uuid": "92be099c-8eec-5efe-a98a-fd700577d85a", 00:10:17.616 "is_configured": true, 00:10:17.616 "data_offset": 2048, 00:10:17.616 "data_size": 63488 00:10:17.616 } 00:10:17.616 ] 00:10:17.616 }' 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.616 02:43:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.186 02:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:18.186 02:43:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:18.186 [2024-12-07 02:43:29.179975] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.127 02:43:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.127 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.127 02:43:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.127 "name": "raid_bdev1", 00:10:19.127 "uuid": "8c7d5dad-3b2e-42da-95c9-23715db22adf", 00:10:19.127 "strip_size_kb": 64, 00:10:19.127 "state": "online", 00:10:19.127 "raid_level": "raid0", 00:10:19.127 "superblock": true, 00:10:19.127 "num_base_bdevs": 4, 00:10:19.127 "num_base_bdevs_discovered": 4, 00:10:19.127 "num_base_bdevs_operational": 4, 00:10:19.127 "base_bdevs_list": [ 00:10:19.127 { 00:10:19.127 "name": "BaseBdev1", 00:10:19.127 "uuid": "3180ef2d-825a-56ba-af72-5226133433f2", 00:10:19.127 "is_configured": true, 00:10:19.127 "data_offset": 2048, 00:10:19.127 "data_size": 63488 00:10:19.127 }, 00:10:19.127 { 00:10:19.127 "name": "BaseBdev2", 00:10:19.127 "uuid": "c2173685-23d1-5be4-825a-f59a214c1c43", 00:10:19.128 "is_configured": true, 00:10:19.128 "data_offset": 2048, 00:10:19.128 "data_size": 63488 00:10:19.128 }, 00:10:19.128 { 00:10:19.128 "name": "BaseBdev3", 00:10:19.128 "uuid": "ed91ddfc-3f14-5685-8fa8-ecdf8be6e76b", 00:10:19.128 "is_configured": true, 00:10:19.128 "data_offset": 2048, 00:10:19.128 "data_size": 63488 00:10:19.128 }, 00:10:19.128 { 00:10:19.128 "name": "BaseBdev4", 00:10:19.128 "uuid": "92be099c-8eec-5efe-a98a-fd700577d85a", 00:10:19.128 "is_configured": true, 00:10:19.128 "data_offset": 2048, 00:10:19.128 "data_size": 63488 00:10:19.128 } 00:10:19.128 ] 00:10:19.128 }' 00:10:19.128 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.128 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.699 [2024-12-07 02:43:30.528485] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.699 [2024-12-07 02:43:30.528606] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.699 [2024-12-07 02:43:30.531201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.699 [2024-12-07 02:43:30.531295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.699 [2024-12-07 02:43:30.531367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.699 [2024-12-07 02:43:30.531429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.699 { 00:10:19.699 "results": [ 00:10:19.699 { 00:10:19.699 "job": "raid_bdev1", 00:10:19.699 "core_mask": "0x1", 00:10:19.699 "workload": "randrw", 00:10:19.699 "percentage": 50, 00:10:19.699 "status": "finished", 00:10:19.699 "queue_depth": 1, 00:10:19.699 "io_size": 131072, 00:10:19.699 "runtime": 1.349159, 00:10:19.699 "iops": 14941.15964093187, 00:10:19.699 "mibps": 1867.6449551164837, 00:10:19.699 "io_failed": 1, 00:10:19.699 "io_timeout": 0, 00:10:19.699 "avg_latency_us": 94.19741387844367, 00:10:19.699 "min_latency_us": 24.258515283842794, 00:10:19.699 "max_latency_us": 1380.8349344978167 00:10:19.699 } 00:10:19.699 ], 00:10:19.699 "core_count": 1 00:10:19.699 } 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82134 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 82134 ']' 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 82134 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82134 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.699 killing process with pid 82134 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82134' 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 82134 00:10:19.699 [2024-12-07 02:43:30.576417] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.699 02:43:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 82134 00:10:19.699 [2024-12-07 02:43:30.643678] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6qzXgsOtPl 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:19.960 ************************************ 00:10:19.960 END TEST raid_read_error_test 00:10:19.960 ************************************ 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:19.960 00:10:19.960 real 0m3.538s 
00:10:19.960 user 0m4.275s 00:10:19.960 sys 0m0.688s 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.960 02:43:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.221 02:43:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:20.221 02:43:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:20.221 02:43:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.221 02:43:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.221 ************************************ 00:10:20.221 START TEST raid_write_error_test 00:10:20.221 ************************************ 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kcL70qQ380 00:10:20.221 02:43:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=82274 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 82274 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 82274 ']' 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.221 02:43:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.221 [2024-12-07 02:43:31.192371] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:20.221 [2024-12-07 02:43:31.192596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82274 ] 00:10:20.481 [2024-12-07 02:43:31.334504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.481 [2024-12-07 02:43:31.404167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.481 [2024-12-07 02:43:31.480106] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.481 [2024-12-07 02:43:31.480145] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.050 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.050 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:21.050 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.051 BaseBdev1_malloc 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.051 true 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.051 [2024-12-07 02:43:32.057956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.051 [2024-12-07 02:43:32.058062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.051 [2024-12-07 02:43:32.058089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.051 [2024-12-07 02:43:32.058099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.051 [2024-12-07 02:43:32.060490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.051 [2024-12-07 02:43:32.060527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.051 BaseBdev1 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.051 BaseBdev2_malloc 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:21.051 02:43:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.051 true 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.051 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.051 [2024-12-07 02:43:32.120751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:21.051 [2024-12-07 02:43:32.120821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.051 [2024-12-07 02:43:32.120853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:21.051 [2024-12-07 02:43:32.120868] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.051 [2024-12-07 02:43:32.124151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.051 [2024-12-07 02:43:32.124235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:21.312 BaseBdev2 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:21.312 BaseBdev3_malloc 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.312 true 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.312 [2024-12-07 02:43:32.167193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.312 [2024-12-07 02:43:32.167277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.312 [2024-12-07 02:43:32.167316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:21.312 [2024-12-07 02:43:32.167325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.312 [2024-12-07 02:43:32.169719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.312 [2024-12-07 02:43:32.169755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:21.312 BaseBdev3 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.312 BaseBdev4_malloc 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.312 true 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.312 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.312 [2024-12-07 02:43:32.214165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:21.312 [2024-12-07 02:43:32.214211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.312 [2024-12-07 02:43:32.214233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:21.312 [2024-12-07 02:43:32.214242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.312 [2024-12-07 02:43:32.216605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.312 [2024-12-07 02:43:32.216670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:21.312 BaseBdev4 
00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.313 [2024-12-07 02:43:32.226212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.313 [2024-12-07 02:43:32.228393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.313 [2024-12-07 02:43:32.228479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.313 [2024-12-07 02:43:32.228533] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:21.313 [2024-12-07 02:43:32.228748] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:21.313 [2024-12-07 02:43:32.228765] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:21.313 [2024-12-07 02:43:32.229035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:21.313 [2024-12-07 02:43:32.229197] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:21.313 [2024-12-07 02:43:32.229210] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:21.313 [2024-12-07 02:43:32.229336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.313 "name": "raid_bdev1", 00:10:21.313 "uuid": "edbb3c71-26f1-4419-b5ad-48a471ff0178", 00:10:21.313 "strip_size_kb": 64, 00:10:21.313 "state": "online", 00:10:21.313 "raid_level": "raid0", 00:10:21.313 "superblock": true, 00:10:21.313 "num_base_bdevs": 4, 00:10:21.313 "num_base_bdevs_discovered": 4, 00:10:21.313 
"num_base_bdevs_operational": 4, 00:10:21.313 "base_bdevs_list": [ 00:10:21.313 { 00:10:21.313 "name": "BaseBdev1", 00:10:21.313 "uuid": "10410aae-2f6a-52ef-916f-65ceaadac870", 00:10:21.313 "is_configured": true, 00:10:21.313 "data_offset": 2048, 00:10:21.313 "data_size": 63488 00:10:21.313 }, 00:10:21.313 { 00:10:21.313 "name": "BaseBdev2", 00:10:21.313 "uuid": "07ca2128-f2df-575f-8aa6-d78e1800ab6d", 00:10:21.313 "is_configured": true, 00:10:21.313 "data_offset": 2048, 00:10:21.313 "data_size": 63488 00:10:21.313 }, 00:10:21.313 { 00:10:21.313 "name": "BaseBdev3", 00:10:21.313 "uuid": "f90d2cc2-8185-5aab-908f-fcf83e101a68", 00:10:21.313 "is_configured": true, 00:10:21.313 "data_offset": 2048, 00:10:21.313 "data_size": 63488 00:10:21.313 }, 00:10:21.313 { 00:10:21.313 "name": "BaseBdev4", 00:10:21.313 "uuid": "eed70824-5b5c-51b1-9944-e0a2a563d9ba", 00:10:21.313 "is_configured": true, 00:10:21.313 "data_offset": 2048, 00:10:21.313 "data_size": 63488 00:10:21.313 } 00:10:21.313 ] 00:10:21.313 }' 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.313 02:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.573 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:21.573 02:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.834 [2024-12-07 02:43:32.741763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.774 "name": "raid_bdev1", 00:10:22.774 "uuid": "edbb3c71-26f1-4419-b5ad-48a471ff0178", 00:10:22.774 "strip_size_kb": 64, 00:10:22.774 "state": "online", 00:10:22.774 "raid_level": "raid0", 00:10:22.774 "superblock": true, 00:10:22.774 "num_base_bdevs": 4, 00:10:22.774 "num_base_bdevs_discovered": 4, 00:10:22.774 "num_base_bdevs_operational": 4, 00:10:22.774 "base_bdevs_list": [ 00:10:22.774 { 00:10:22.774 "name": "BaseBdev1", 00:10:22.774 "uuid": "10410aae-2f6a-52ef-916f-65ceaadac870", 00:10:22.774 "is_configured": true, 00:10:22.774 "data_offset": 2048, 00:10:22.774 "data_size": 63488 00:10:22.774 }, 00:10:22.774 { 00:10:22.774 "name": "BaseBdev2", 00:10:22.774 "uuid": "07ca2128-f2df-575f-8aa6-d78e1800ab6d", 00:10:22.774 "is_configured": true, 00:10:22.774 "data_offset": 2048, 00:10:22.774 "data_size": 63488 00:10:22.774 }, 00:10:22.774 { 00:10:22.774 "name": "BaseBdev3", 00:10:22.774 "uuid": "f90d2cc2-8185-5aab-908f-fcf83e101a68", 00:10:22.774 "is_configured": true, 00:10:22.774 "data_offset": 2048, 00:10:22.774 "data_size": 63488 00:10:22.774 }, 00:10:22.774 { 00:10:22.774 "name": "BaseBdev4", 00:10:22.774 "uuid": "eed70824-5b5c-51b1-9944-e0a2a563d9ba", 00:10:22.774 "is_configured": true, 00:10:22.774 "data_offset": 2048, 00:10:22.774 "data_size": 63488 00:10:22.774 } 00:10:22.774 ] 00:10:22.774 }' 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.774 02:43:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.034 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.034 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.034 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:23.034 [2024-12-07 02:43:34.070281] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.034 [2024-12-07 02:43:34.070379] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.034 [2024-12-07 02:43:34.073014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.034 [2024-12-07 02:43:34.073107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.034 [2024-12-07 02:43:34.073175] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.034 [2024-12-07 02:43:34.073220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:23.034 { 00:10:23.034 "results": [ 00:10:23.034 { 00:10:23.034 "job": "raid_bdev1", 00:10:23.034 "core_mask": "0x1", 00:10:23.034 "workload": "randrw", 00:10:23.034 "percentage": 50, 00:10:23.034 "status": "finished", 00:10:23.034 "queue_depth": 1, 00:10:23.034 "io_size": 131072, 00:10:23.034 "runtime": 1.329059, 00:10:23.034 "iops": 14715.674774408059, 00:10:23.034 "mibps": 1839.4593468010073, 00:10:23.034 "io_failed": 1, 00:10:23.034 "io_timeout": 0, 00:10:23.034 "avg_latency_us": 95.603816824741, 00:10:23.034 "min_latency_us": 24.705676855895195, 00:10:23.034 "max_latency_us": 1323.598253275109 00:10:23.034 } 00:10:23.034 ], 00:10:23.034 "core_count": 1 00:10:23.034 } 00:10:23.034 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.035 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 82274 00:10:23.035 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 82274 ']' 00:10:23.035 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 82274 00:10:23.035 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:10:23.035 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.035 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82274 00:10:23.295 killing process with pid 82274 00:10:23.295 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.295 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.295 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82274' 00:10:23.295 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 82274 00:10:23.295 [2024-12-07 02:43:34.115796] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.295 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 82274 00:10:23.295 [2024-12-07 02:43:34.181738] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kcL70qQ380 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:10:23.555 00:10:23.555 real 0m3.472s 00:10:23.555 user 0m4.174s 00:10:23.555 sys 0m0.652s 00:10:23.555 02:43:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.555 02:43:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.555 ************************************ 00:10:23.555 END TEST raid_write_error_test 00:10:23.555 ************************************ 00:10:23.555 02:43:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:23.555 02:43:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:23.555 02:43:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:23.555 02:43:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.555 02:43:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 ************************************ 00:10:23.815 START TEST raid_state_function_test 00:10:23.815 ************************************ 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82401 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82401' 00:10:23.815 Process raid pid: 82401 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82401 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82401 ']' 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.815 02:43:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 [2024-12-07 02:43:34.729625] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:23.815 [2024-12-07 02:43:34.729824] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.815 [2024-12-07 02:43:34.889793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.075 [2024-12-07 02:43:34.959560] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.075 [2024-12-07 02:43:35.036263] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.075 [2024-12-07 02:43:35.036384] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 [2024-12-07 02:43:35.567658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.644 [2024-12-07 02:43:35.567712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.644 [2024-12-07 02:43:35.567748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.644 [2024-12-07 02:43:35.567759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.644 [2024-12-07 02:43:35.567765] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:24.644 [2024-12-07 02:43:35.567777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.644 [2024-12-07 02:43:35.567783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.644 [2024-12-07 02:43:35.567794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.644 "name": "Existed_Raid", 00:10:24.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.644 "strip_size_kb": 64, 00:10:24.644 "state": "configuring", 00:10:24.644 "raid_level": "concat", 00:10:24.644 "superblock": false, 00:10:24.644 "num_base_bdevs": 4, 00:10:24.644 "num_base_bdevs_discovered": 0, 00:10:24.644 "num_base_bdevs_operational": 4, 00:10:24.644 "base_bdevs_list": [ 00:10:24.644 { 00:10:24.644 "name": "BaseBdev1", 00:10:24.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.644 "is_configured": false, 00:10:24.644 "data_offset": 0, 00:10:24.644 "data_size": 0 00:10:24.644 }, 00:10:24.644 { 00:10:24.644 "name": "BaseBdev2", 00:10:24.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.644 "is_configured": false, 00:10:24.644 "data_offset": 0, 00:10:24.644 "data_size": 0 00:10:24.644 }, 00:10:24.644 { 00:10:24.644 "name": "BaseBdev3", 00:10:24.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.644 "is_configured": false, 00:10:24.644 "data_offset": 0, 00:10:24.644 "data_size": 0 00:10:24.644 }, 00:10:24.644 { 00:10:24.644 "name": "BaseBdev4", 00:10:24.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.644 "is_configured": false, 00:10:24.644 "data_offset": 0, 00:10:24.644 "data_size": 0 00:10:24.644 } 00:10:24.644 ] 00:10:24.644 }' 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.644 02:43:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.214 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 [2024-12-07 02:43:36.046715] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.215 [2024-12-07 02:43:36.046797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 [2024-12-07 02:43:36.054757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.215 [2024-12-07 02:43:36.054833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.215 [2024-12-07 02:43:36.054864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.215 [2024-12-07 02:43:36.054888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.215 [2024-12-07 02:43:36.054959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.215 [2024-12-07 02:43:36.054984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.215 [2024-12-07 02:43:36.055002] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.215 [2024-12-07 02:43:36.055055] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 [2024-12-07 02:43:36.077627] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.215 BaseBdev1 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 [ 00:10:25.215 { 00:10:25.215 "name": "BaseBdev1", 00:10:25.215 "aliases": [ 00:10:25.215 "323178f2-636a-48a4-ba65-b9f132459102" 00:10:25.215 ], 00:10:25.215 "product_name": "Malloc disk", 00:10:25.215 "block_size": 512, 00:10:25.215 "num_blocks": 65536, 00:10:25.215 "uuid": "323178f2-636a-48a4-ba65-b9f132459102", 00:10:25.215 "assigned_rate_limits": { 00:10:25.215 "rw_ios_per_sec": 0, 00:10:25.215 "rw_mbytes_per_sec": 0, 00:10:25.215 "r_mbytes_per_sec": 0, 00:10:25.215 "w_mbytes_per_sec": 0 00:10:25.215 }, 00:10:25.215 "claimed": true, 00:10:25.215 "claim_type": "exclusive_write", 00:10:25.215 "zoned": false, 00:10:25.215 "supported_io_types": { 00:10:25.215 "read": true, 00:10:25.215 "write": true, 00:10:25.215 "unmap": true, 00:10:25.215 "flush": true, 00:10:25.215 "reset": true, 00:10:25.215 "nvme_admin": false, 00:10:25.215 "nvme_io": false, 00:10:25.215 "nvme_io_md": false, 00:10:25.215 "write_zeroes": true, 00:10:25.215 "zcopy": true, 00:10:25.215 "get_zone_info": false, 00:10:25.215 "zone_management": false, 00:10:25.215 "zone_append": false, 00:10:25.215 "compare": false, 00:10:25.215 "compare_and_write": false, 00:10:25.215 "abort": true, 00:10:25.215 "seek_hole": false, 00:10:25.215 "seek_data": false, 00:10:25.215 "copy": true, 00:10:25.215 "nvme_iov_md": false 00:10:25.215 }, 00:10:25.215 "memory_domains": [ 00:10:25.215 { 00:10:25.215 "dma_device_id": "system", 00:10:25.215 "dma_device_type": 1 00:10:25.215 }, 00:10:25.215 { 00:10:25.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.215 "dma_device_type": 2 00:10:25.215 } 00:10:25.215 ], 00:10:25.215 "driver_specific": {} 00:10:25.215 } 00:10:25.215 ] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.215 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.215 "name": "Existed_Raid", 
00:10:25.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.215 "strip_size_kb": 64, 00:10:25.215 "state": "configuring", 00:10:25.215 "raid_level": "concat", 00:10:25.215 "superblock": false, 00:10:25.215 "num_base_bdevs": 4, 00:10:25.215 "num_base_bdevs_discovered": 1, 00:10:25.215 "num_base_bdevs_operational": 4, 00:10:25.215 "base_bdevs_list": [ 00:10:25.216 { 00:10:25.216 "name": "BaseBdev1", 00:10:25.216 "uuid": "323178f2-636a-48a4-ba65-b9f132459102", 00:10:25.216 "is_configured": true, 00:10:25.216 "data_offset": 0, 00:10:25.216 "data_size": 65536 00:10:25.216 }, 00:10:25.216 { 00:10:25.216 "name": "BaseBdev2", 00:10:25.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.216 "is_configured": false, 00:10:25.216 "data_offset": 0, 00:10:25.216 "data_size": 0 00:10:25.216 }, 00:10:25.216 { 00:10:25.216 "name": "BaseBdev3", 00:10:25.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.216 "is_configured": false, 00:10:25.216 "data_offset": 0, 00:10:25.216 "data_size": 0 00:10:25.216 }, 00:10:25.216 { 00:10:25.216 "name": "BaseBdev4", 00:10:25.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.216 "is_configured": false, 00:10:25.216 "data_offset": 0, 00:10:25.216 "data_size": 0 00:10:25.216 } 00:10:25.216 ] 00:10:25.216 }' 00:10:25.216 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.216 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.786 [2024-12-07 02:43:36.568877] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.786 [2024-12-07 02:43:36.568960] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.786 [2024-12-07 02:43:36.576856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.786 [2024-12-07 02:43:36.578974] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.786 [2024-12-07 02:43:36.579010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.786 [2024-12-07 02:43:36.579019] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.786 [2024-12-07 02:43:36.579044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.786 [2024-12-07 02:43:36.579050] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.786 [2024-12-07 02:43:36.579058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.786 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.787 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.787 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.787 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.787 "name": "Existed_Raid", 00:10:25.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.787 "strip_size_kb": 64, 00:10:25.787 "state": "configuring", 00:10:25.787 "raid_level": "concat", 00:10:25.787 "superblock": false, 00:10:25.787 "num_base_bdevs": 4, 00:10:25.787 
"num_base_bdevs_discovered": 1, 00:10:25.787 "num_base_bdevs_operational": 4, 00:10:25.787 "base_bdevs_list": [ 00:10:25.787 { 00:10:25.787 "name": "BaseBdev1", 00:10:25.787 "uuid": "323178f2-636a-48a4-ba65-b9f132459102", 00:10:25.787 "is_configured": true, 00:10:25.787 "data_offset": 0, 00:10:25.787 "data_size": 65536 00:10:25.787 }, 00:10:25.787 { 00:10:25.787 "name": "BaseBdev2", 00:10:25.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.787 "is_configured": false, 00:10:25.787 "data_offset": 0, 00:10:25.787 "data_size": 0 00:10:25.787 }, 00:10:25.787 { 00:10:25.787 "name": "BaseBdev3", 00:10:25.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.787 "is_configured": false, 00:10:25.787 "data_offset": 0, 00:10:25.787 "data_size": 0 00:10:25.787 }, 00:10:25.787 { 00:10:25.787 "name": "BaseBdev4", 00:10:25.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.787 "is_configured": false, 00:10:25.787 "data_offset": 0, 00:10:25.787 "data_size": 0 00:10:25.787 } 00:10:25.787 ] 00:10:25.787 }' 00:10:25.787 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.787 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.047 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:26.047 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.047 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.047 [2024-12-07 02:43:36.989997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.047 BaseBdev2 00:10:26.047 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.047 02:43:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:26.047 02:43:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:26.047 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.048 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.048 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.048 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.048 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.048 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.048 02:43:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.048 [ 00:10:26.048 { 00:10:26.048 "name": "BaseBdev2", 00:10:26.048 "aliases": [ 00:10:26.048 "95ade3d8-c518-4a2a-a0ef-acf7bc5feec2" 00:10:26.048 ], 00:10:26.048 "product_name": "Malloc disk", 00:10:26.048 "block_size": 512, 00:10:26.048 "num_blocks": 65536, 00:10:26.048 "uuid": "95ade3d8-c518-4a2a-a0ef-acf7bc5feec2", 00:10:26.048 "assigned_rate_limits": { 00:10:26.048 "rw_ios_per_sec": 0, 00:10:26.048 "rw_mbytes_per_sec": 0, 00:10:26.048 "r_mbytes_per_sec": 0, 00:10:26.048 "w_mbytes_per_sec": 0 00:10:26.048 }, 00:10:26.048 "claimed": true, 00:10:26.048 "claim_type": "exclusive_write", 00:10:26.048 "zoned": false, 00:10:26.048 "supported_io_types": { 
00:10:26.048 "read": true, 00:10:26.048 "write": true, 00:10:26.048 "unmap": true, 00:10:26.048 "flush": true, 00:10:26.048 "reset": true, 00:10:26.048 "nvme_admin": false, 00:10:26.048 "nvme_io": false, 00:10:26.048 "nvme_io_md": false, 00:10:26.048 "write_zeroes": true, 00:10:26.048 "zcopy": true, 00:10:26.048 "get_zone_info": false, 00:10:26.048 "zone_management": false, 00:10:26.048 "zone_append": false, 00:10:26.048 "compare": false, 00:10:26.048 "compare_and_write": false, 00:10:26.048 "abort": true, 00:10:26.048 "seek_hole": false, 00:10:26.048 "seek_data": false, 00:10:26.048 "copy": true, 00:10:26.048 "nvme_iov_md": false 00:10:26.048 }, 00:10:26.048 "memory_domains": [ 00:10:26.048 { 00:10:26.048 "dma_device_id": "system", 00:10:26.048 "dma_device_type": 1 00:10:26.048 }, 00:10:26.048 { 00:10:26.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.048 "dma_device_type": 2 00:10:26.048 } 00:10:26.048 ], 00:10:26.048 "driver_specific": {} 00:10:26.048 } 00:10:26.048 ] 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.048 "name": "Existed_Raid", 00:10:26.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.048 "strip_size_kb": 64, 00:10:26.048 "state": "configuring", 00:10:26.048 "raid_level": "concat", 00:10:26.048 "superblock": false, 00:10:26.048 "num_base_bdevs": 4, 00:10:26.048 "num_base_bdevs_discovered": 2, 00:10:26.048 "num_base_bdevs_operational": 4, 00:10:26.048 "base_bdevs_list": [ 00:10:26.048 { 00:10:26.048 "name": "BaseBdev1", 00:10:26.048 "uuid": "323178f2-636a-48a4-ba65-b9f132459102", 00:10:26.048 "is_configured": true, 00:10:26.048 "data_offset": 0, 00:10:26.048 "data_size": 65536 00:10:26.048 }, 00:10:26.048 { 00:10:26.048 "name": "BaseBdev2", 00:10:26.048 "uuid": "95ade3d8-c518-4a2a-a0ef-acf7bc5feec2", 00:10:26.048 
"is_configured": true, 00:10:26.048 "data_offset": 0, 00:10:26.048 "data_size": 65536 00:10:26.048 }, 00:10:26.048 { 00:10:26.048 "name": "BaseBdev3", 00:10:26.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.048 "is_configured": false, 00:10:26.048 "data_offset": 0, 00:10:26.048 "data_size": 0 00:10:26.048 }, 00:10:26.048 { 00:10:26.048 "name": "BaseBdev4", 00:10:26.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.048 "is_configured": false, 00:10:26.048 "data_offset": 0, 00:10:26.048 "data_size": 0 00:10:26.048 } 00:10:26.048 ] 00:10:26.048 }' 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.048 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 [2024-12-07 02:43:37.517822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.619 BaseBdev3 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 [ 00:10:26.619 { 00:10:26.619 "name": "BaseBdev3", 00:10:26.619 "aliases": [ 00:10:26.619 "9f67a2fb-4372-4b3e-bd1b-a801fe20db51" 00:10:26.619 ], 00:10:26.619 "product_name": "Malloc disk", 00:10:26.619 "block_size": 512, 00:10:26.619 "num_blocks": 65536, 00:10:26.619 "uuid": "9f67a2fb-4372-4b3e-bd1b-a801fe20db51", 00:10:26.619 "assigned_rate_limits": { 00:10:26.619 "rw_ios_per_sec": 0, 00:10:26.619 "rw_mbytes_per_sec": 0, 00:10:26.619 "r_mbytes_per_sec": 0, 00:10:26.619 "w_mbytes_per_sec": 0 00:10:26.619 }, 00:10:26.619 "claimed": true, 00:10:26.619 "claim_type": "exclusive_write", 00:10:26.619 "zoned": false, 00:10:26.619 "supported_io_types": { 00:10:26.619 "read": true, 00:10:26.619 "write": true, 00:10:26.619 "unmap": true, 00:10:26.619 "flush": true, 00:10:26.619 "reset": true, 00:10:26.619 "nvme_admin": false, 00:10:26.619 "nvme_io": false, 00:10:26.619 "nvme_io_md": false, 00:10:26.619 "write_zeroes": true, 00:10:26.619 "zcopy": true, 00:10:26.619 "get_zone_info": false, 00:10:26.619 "zone_management": false, 00:10:26.619 "zone_append": false, 00:10:26.619 "compare": false, 00:10:26.619 "compare_and_write": false, 
00:10:26.619 "abort": true, 00:10:26.619 "seek_hole": false, 00:10:26.619 "seek_data": false, 00:10:26.619 "copy": true, 00:10:26.619 "nvme_iov_md": false 00:10:26.619 }, 00:10:26.619 "memory_domains": [ 00:10:26.619 { 00:10:26.619 "dma_device_id": "system", 00:10:26.619 "dma_device_type": 1 00:10:26.619 }, 00:10:26.619 { 00:10:26.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.619 "dma_device_type": 2 00:10:26.619 } 00:10:26.619 ], 00:10:26.619 "driver_specific": {} 00:10:26.619 } 00:10:26.619 ] 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.619 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.619 "name": "Existed_Raid", 00:10:26.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.619 "strip_size_kb": 64, 00:10:26.619 "state": "configuring", 00:10:26.619 "raid_level": "concat", 00:10:26.619 "superblock": false, 00:10:26.619 "num_base_bdevs": 4, 00:10:26.619 "num_base_bdevs_discovered": 3, 00:10:26.619 "num_base_bdevs_operational": 4, 00:10:26.619 "base_bdevs_list": [ 00:10:26.619 { 00:10:26.619 "name": "BaseBdev1", 00:10:26.619 "uuid": "323178f2-636a-48a4-ba65-b9f132459102", 00:10:26.619 "is_configured": true, 00:10:26.619 "data_offset": 0, 00:10:26.619 "data_size": 65536 00:10:26.619 }, 00:10:26.619 { 00:10:26.619 "name": "BaseBdev2", 00:10:26.619 "uuid": "95ade3d8-c518-4a2a-a0ef-acf7bc5feec2", 00:10:26.619 "is_configured": true, 00:10:26.619 "data_offset": 0, 00:10:26.619 "data_size": 65536 00:10:26.619 }, 00:10:26.619 { 00:10:26.619 "name": "BaseBdev3", 00:10:26.619 "uuid": "9f67a2fb-4372-4b3e-bd1b-a801fe20db51", 00:10:26.619 "is_configured": true, 00:10:26.619 "data_offset": 0, 00:10:26.620 "data_size": 65536 00:10:26.620 }, 00:10:26.620 { 00:10:26.620 "name": "BaseBdev4", 00:10:26.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.620 "is_configured": false, 
00:10:26.620 "data_offset": 0, 00:10:26.620 "data_size": 0 00:10:26.620 } 00:10:26.620 ] 00:10:26.620 }' 00:10:26.620 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.620 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.189 02:43:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:27.189 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.189 02:43:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.189 [2024-12-07 02:43:38.013856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:27.190 [2024-12-07 02:43:38.013910] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:27.190 [2024-12-07 02:43:38.013919] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:27.190 [2024-12-07 02:43:38.014240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:27.190 [2024-12-07 02:43:38.014388] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:27.190 [2024-12-07 02:43:38.014401] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:27.190 [2024-12-07 02:43:38.014656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.190 BaseBdev4 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 [ 00:10:27.190 { 00:10:27.190 "name": "BaseBdev4", 00:10:27.190 "aliases": [ 00:10:27.190 "1281e65a-711c-4310-bc4b-26a737b56277" 00:10:27.190 ], 00:10:27.190 "product_name": "Malloc disk", 00:10:27.190 "block_size": 512, 00:10:27.190 "num_blocks": 65536, 00:10:27.190 "uuid": "1281e65a-711c-4310-bc4b-26a737b56277", 00:10:27.190 "assigned_rate_limits": { 00:10:27.190 "rw_ios_per_sec": 0, 00:10:27.190 "rw_mbytes_per_sec": 0, 00:10:27.190 "r_mbytes_per_sec": 0, 00:10:27.190 "w_mbytes_per_sec": 0 00:10:27.190 }, 00:10:27.190 "claimed": true, 00:10:27.190 "claim_type": "exclusive_write", 00:10:27.190 "zoned": false, 00:10:27.190 "supported_io_types": { 00:10:27.190 "read": true, 00:10:27.190 "write": true, 00:10:27.190 "unmap": true, 00:10:27.190 "flush": true, 00:10:27.190 "reset": true, 00:10:27.190 
"nvme_admin": false, 00:10:27.190 "nvme_io": false, 00:10:27.190 "nvme_io_md": false, 00:10:27.190 "write_zeroes": true, 00:10:27.190 "zcopy": true, 00:10:27.190 "get_zone_info": false, 00:10:27.190 "zone_management": false, 00:10:27.190 "zone_append": false, 00:10:27.190 "compare": false, 00:10:27.190 "compare_and_write": false, 00:10:27.190 "abort": true, 00:10:27.190 "seek_hole": false, 00:10:27.190 "seek_data": false, 00:10:27.190 "copy": true, 00:10:27.190 "nvme_iov_md": false 00:10:27.190 }, 00:10:27.190 "memory_domains": [ 00:10:27.190 { 00:10:27.190 "dma_device_id": "system", 00:10:27.190 "dma_device_type": 1 00:10:27.190 }, 00:10:27.190 { 00:10:27.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.190 "dma_device_type": 2 00:10:27.190 } 00:10:27.190 ], 00:10:27.190 "driver_specific": {} 00:10:27.190 } 00:10:27.190 ] 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.190 
02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.190 "name": "Existed_Raid", 00:10:27.190 "uuid": "f76c7b1d-51b8-482a-8221-b9336b310f0d", 00:10:27.190 "strip_size_kb": 64, 00:10:27.190 "state": "online", 00:10:27.190 "raid_level": "concat", 00:10:27.190 "superblock": false, 00:10:27.190 "num_base_bdevs": 4, 00:10:27.190 "num_base_bdevs_discovered": 4, 00:10:27.190 "num_base_bdevs_operational": 4, 00:10:27.190 "base_bdevs_list": [ 00:10:27.190 { 00:10:27.190 "name": "BaseBdev1", 00:10:27.190 "uuid": "323178f2-636a-48a4-ba65-b9f132459102", 00:10:27.190 "is_configured": true, 00:10:27.190 "data_offset": 0, 00:10:27.190 "data_size": 65536 00:10:27.190 }, 00:10:27.190 { 00:10:27.190 "name": "BaseBdev2", 00:10:27.190 "uuid": "95ade3d8-c518-4a2a-a0ef-acf7bc5feec2", 00:10:27.190 "is_configured": true, 00:10:27.190 "data_offset": 0, 00:10:27.190 "data_size": 65536 00:10:27.190 }, 00:10:27.190 { 00:10:27.190 "name": "BaseBdev3", 
00:10:27.190 "uuid": "9f67a2fb-4372-4b3e-bd1b-a801fe20db51", 00:10:27.190 "is_configured": true, 00:10:27.190 "data_offset": 0, 00:10:27.190 "data_size": 65536 00:10:27.190 }, 00:10:27.190 { 00:10:27.190 "name": "BaseBdev4", 00:10:27.190 "uuid": "1281e65a-711c-4310-bc4b-26a737b56277", 00:10:27.190 "is_configured": true, 00:10:27.190 "data_offset": 0, 00:10:27.190 "data_size": 65536 00:10:27.190 } 00:10:27.190 ] 00:10:27.190 }' 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.190 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.449 [2024-12-07 02:43:38.493367] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.449 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.708 
02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.708 "name": "Existed_Raid", 00:10:27.708 "aliases": [ 00:10:27.708 "f76c7b1d-51b8-482a-8221-b9336b310f0d" 00:10:27.708 ], 00:10:27.708 "product_name": "Raid Volume", 00:10:27.708 "block_size": 512, 00:10:27.708 "num_blocks": 262144, 00:10:27.708 "uuid": "f76c7b1d-51b8-482a-8221-b9336b310f0d", 00:10:27.708 "assigned_rate_limits": { 00:10:27.708 "rw_ios_per_sec": 0, 00:10:27.708 "rw_mbytes_per_sec": 0, 00:10:27.708 "r_mbytes_per_sec": 0, 00:10:27.708 "w_mbytes_per_sec": 0 00:10:27.708 }, 00:10:27.708 "claimed": false, 00:10:27.708 "zoned": false, 00:10:27.708 "supported_io_types": { 00:10:27.708 "read": true, 00:10:27.708 "write": true, 00:10:27.708 "unmap": true, 00:10:27.708 "flush": true, 00:10:27.708 "reset": true, 00:10:27.708 "nvme_admin": false, 00:10:27.708 "nvme_io": false, 00:10:27.708 "nvme_io_md": false, 00:10:27.708 "write_zeroes": true, 00:10:27.708 "zcopy": false, 00:10:27.708 "get_zone_info": false, 00:10:27.708 "zone_management": false, 00:10:27.708 "zone_append": false, 00:10:27.708 "compare": false, 00:10:27.708 "compare_and_write": false, 00:10:27.708 "abort": false, 00:10:27.708 "seek_hole": false, 00:10:27.708 "seek_data": false, 00:10:27.708 "copy": false, 00:10:27.708 "nvme_iov_md": false 00:10:27.708 }, 00:10:27.708 "memory_domains": [ 00:10:27.708 { 00:10:27.708 "dma_device_id": "system", 00:10:27.708 "dma_device_type": 1 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.708 "dma_device_type": 2 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": "system", 00:10:27.708 "dma_device_type": 1 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.708 "dma_device_type": 2 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": "system", 00:10:27.708 "dma_device_type": 1 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:27.708 "dma_device_type": 2 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": "system", 00:10:27.708 "dma_device_type": 1 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.708 "dma_device_type": 2 00:10:27.708 } 00:10:27.708 ], 00:10:27.708 "driver_specific": { 00:10:27.708 "raid": { 00:10:27.708 "uuid": "f76c7b1d-51b8-482a-8221-b9336b310f0d", 00:10:27.708 "strip_size_kb": 64, 00:10:27.708 "state": "online", 00:10:27.708 "raid_level": "concat", 00:10:27.708 "superblock": false, 00:10:27.708 "num_base_bdevs": 4, 00:10:27.708 "num_base_bdevs_discovered": 4, 00:10:27.708 "num_base_bdevs_operational": 4, 00:10:27.708 "base_bdevs_list": [ 00:10:27.708 { 00:10:27.708 "name": "BaseBdev1", 00:10:27.708 "uuid": "323178f2-636a-48a4-ba65-b9f132459102", 00:10:27.708 "is_configured": true, 00:10:27.708 "data_offset": 0, 00:10:27.708 "data_size": 65536 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "name": "BaseBdev2", 00:10:27.708 "uuid": "95ade3d8-c518-4a2a-a0ef-acf7bc5feec2", 00:10:27.708 "is_configured": true, 00:10:27.708 "data_offset": 0, 00:10:27.708 "data_size": 65536 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "name": "BaseBdev3", 00:10:27.708 "uuid": "9f67a2fb-4372-4b3e-bd1b-a801fe20db51", 00:10:27.708 "is_configured": true, 00:10:27.708 "data_offset": 0, 00:10:27.708 "data_size": 65536 00:10:27.708 }, 00:10:27.708 { 00:10:27.708 "name": "BaseBdev4", 00:10:27.708 "uuid": "1281e65a-711c-4310-bc4b-26a737b56277", 00:10:27.708 "is_configured": true, 00:10:27.708 "data_offset": 0, 00:10:27.708 "data_size": 65536 00:10:27.708 } 00:10:27.708 ] 00:10:27.708 } 00:10:27.708 } 00:10:27.708 }' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:27.708 BaseBdev2 
00:10:27.708 BaseBdev3 00:10:27.708 BaseBdev4' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.708 02:43:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.708 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.967 02:43:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 [2024-12-07 02:43:38.832515] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.967 [2024-12-07 02:43:38.832548] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.967 [2024-12-07 02:43:38.832620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.967 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.967 "name": "Existed_Raid", 00:10:27.967 "uuid": "f76c7b1d-51b8-482a-8221-b9336b310f0d", 00:10:27.967 "strip_size_kb": 64, 00:10:27.967 "state": "offline", 00:10:27.967 "raid_level": "concat", 00:10:27.967 "superblock": false, 00:10:27.967 "num_base_bdevs": 4, 00:10:27.967 "num_base_bdevs_discovered": 3, 00:10:27.967 "num_base_bdevs_operational": 3, 00:10:27.967 "base_bdevs_list": [ 00:10:27.967 { 00:10:27.967 "name": null, 00:10:27.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.967 "is_configured": false, 00:10:27.967 "data_offset": 0, 00:10:27.967 "data_size": 65536 00:10:27.967 }, 00:10:27.967 { 00:10:27.967 "name": "BaseBdev2", 00:10:27.967 "uuid": "95ade3d8-c518-4a2a-a0ef-acf7bc5feec2", 00:10:27.967 "is_configured": 
true, 00:10:27.967 "data_offset": 0, 00:10:27.967 "data_size": 65536 00:10:27.967 }, 00:10:27.967 { 00:10:27.967 "name": "BaseBdev3", 00:10:27.967 "uuid": "9f67a2fb-4372-4b3e-bd1b-a801fe20db51", 00:10:27.967 "is_configured": true, 00:10:27.967 "data_offset": 0, 00:10:27.967 "data_size": 65536 00:10:27.967 }, 00:10:27.967 { 00:10:27.968 "name": "BaseBdev4", 00:10:27.968 "uuid": "1281e65a-711c-4310-bc4b-26a737b56277", 00:10:27.968 "is_configured": true, 00:10:27.968 "data_offset": 0, 00:10:27.968 "data_size": 65536 00:10:27.968 } 00:10:27.968 ] 00:10:27.968 }' 00:10:27.968 02:43:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.968 02:43:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.226 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.226 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.226 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.226 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.226 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.227 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 [2024-12-07 02:43:39.312790] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 [2024-12-07 02:43:39.393176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.486 02:43:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 [2024-12-07 02:43:39.473436] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:28.486 [2024-12-07 02:43:39.473555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.486 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 BaseBdev2 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 [ 00:10:28.747 { 00:10:28.747 "name": "BaseBdev2", 00:10:28.747 "aliases": [ 00:10:28.747 "48432e93-e8e4-4d01-9c65-5d2a99e15679" 00:10:28.747 ], 00:10:28.747 "product_name": "Malloc disk", 00:10:28.747 "block_size": 512, 00:10:28.747 "num_blocks": 65536, 00:10:28.747 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:28.747 "assigned_rate_limits": { 00:10:28.747 "rw_ios_per_sec": 0, 00:10:28.747 "rw_mbytes_per_sec": 0, 00:10:28.747 "r_mbytes_per_sec": 0, 00:10:28.747 "w_mbytes_per_sec": 0 00:10:28.747 }, 00:10:28.747 "claimed": false, 00:10:28.747 "zoned": false, 00:10:28.747 "supported_io_types": { 00:10:28.747 "read": true, 00:10:28.747 "write": true, 00:10:28.747 "unmap": true, 00:10:28.747 "flush": true, 00:10:28.747 "reset": true, 00:10:28.747 "nvme_admin": false, 00:10:28.747 "nvme_io": false, 00:10:28.747 "nvme_io_md": false, 00:10:28.747 "write_zeroes": true, 00:10:28.747 "zcopy": true, 00:10:28.747 "get_zone_info": false, 00:10:28.747 "zone_management": false, 00:10:28.747 "zone_append": false, 00:10:28.747 "compare": false, 00:10:28.747 "compare_and_write": false, 00:10:28.747 "abort": true, 00:10:28.747 "seek_hole": false, 00:10:28.747 
"seek_data": false, 00:10:28.747 "copy": true, 00:10:28.747 "nvme_iov_md": false 00:10:28.747 }, 00:10:28.747 "memory_domains": [ 00:10:28.747 { 00:10:28.747 "dma_device_id": "system", 00:10:28.747 "dma_device_type": 1 00:10:28.747 }, 00:10:28.747 { 00:10:28.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.747 "dma_device_type": 2 00:10:28.747 } 00:10:28.747 ], 00:10:28.747 "driver_specific": {} 00:10:28.747 } 00:10:28.747 ] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 BaseBdev3 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.747 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.747 [ 00:10:28.747 { 00:10:28.747 "name": "BaseBdev3", 00:10:28.747 "aliases": [ 00:10:28.747 "a89c4601-c796-40ba-920a-47d7c499ce62" 00:10:28.747 ], 00:10:28.747 "product_name": "Malloc disk", 00:10:28.747 "block_size": 512, 00:10:28.747 "num_blocks": 65536, 00:10:28.747 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:28.747 "assigned_rate_limits": { 00:10:28.747 "rw_ios_per_sec": 0, 00:10:28.747 "rw_mbytes_per_sec": 0, 00:10:28.747 "r_mbytes_per_sec": 0, 00:10:28.747 "w_mbytes_per_sec": 0 00:10:28.747 }, 00:10:28.747 "claimed": false, 00:10:28.747 "zoned": false, 00:10:28.747 "supported_io_types": { 00:10:28.747 "read": true, 00:10:28.747 "write": true, 00:10:28.748 "unmap": true, 00:10:28.748 "flush": true, 00:10:28.748 "reset": true, 00:10:28.748 "nvme_admin": false, 00:10:28.748 "nvme_io": false, 00:10:28.748 "nvme_io_md": false, 00:10:28.748 "write_zeroes": true, 00:10:28.748 "zcopy": true, 00:10:28.748 "get_zone_info": false, 00:10:28.748 "zone_management": false, 00:10:28.748 "zone_append": false, 00:10:28.748 "compare": false, 00:10:28.748 "compare_and_write": false, 00:10:28.748 "abort": true, 00:10:28.748 "seek_hole": false, 00:10:28.748 "seek_data": false, 
00:10:28.748 "copy": true, 00:10:28.748 "nvme_iov_md": false 00:10:28.748 }, 00:10:28.748 "memory_domains": [ 00:10:28.748 { 00:10:28.748 "dma_device_id": "system", 00:10:28.748 "dma_device_type": 1 00:10:28.748 }, 00:10:28.748 { 00:10:28.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.748 "dma_device_type": 2 00:10:28.748 } 00:10:28.748 ], 00:10:28.748 "driver_specific": {} 00:10:28.748 } 00:10:28.748 ] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.748 BaseBdev4 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.748 
02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.748 [ 00:10:28.748 { 00:10:28.748 "name": "BaseBdev4", 00:10:28.748 "aliases": [ 00:10:28.748 "1e1be905-2fa9-4929-b002-f80adebd49b5" 00:10:28.748 ], 00:10:28.748 "product_name": "Malloc disk", 00:10:28.748 "block_size": 512, 00:10:28.748 "num_blocks": 65536, 00:10:28.748 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:28.748 "assigned_rate_limits": { 00:10:28.748 "rw_ios_per_sec": 0, 00:10:28.748 "rw_mbytes_per_sec": 0, 00:10:28.748 "r_mbytes_per_sec": 0, 00:10:28.748 "w_mbytes_per_sec": 0 00:10:28.748 }, 00:10:28.748 "claimed": false, 00:10:28.748 "zoned": false, 00:10:28.748 "supported_io_types": { 00:10:28.748 "read": true, 00:10:28.748 "write": true, 00:10:28.748 "unmap": true, 00:10:28.748 "flush": true, 00:10:28.748 "reset": true, 00:10:28.748 "nvme_admin": false, 00:10:28.748 "nvme_io": false, 00:10:28.748 "nvme_io_md": false, 00:10:28.748 "write_zeroes": true, 00:10:28.748 "zcopy": true, 00:10:28.748 "get_zone_info": false, 00:10:28.748 "zone_management": false, 00:10:28.748 "zone_append": false, 00:10:28.748 "compare": false, 00:10:28.748 "compare_and_write": false, 00:10:28.748 "abort": true, 00:10:28.748 "seek_hole": false, 00:10:28.748 "seek_data": false, 00:10:28.748 
"copy": true, 00:10:28.748 "nvme_iov_md": false 00:10:28.748 }, 00:10:28.748 "memory_domains": [ 00:10:28.748 { 00:10:28.748 "dma_device_id": "system", 00:10:28.748 "dma_device_type": 1 00:10:28.748 }, 00:10:28.748 { 00:10:28.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.748 "dma_device_type": 2 00:10:28.748 } 00:10:28.748 ], 00:10:28.748 "driver_specific": {} 00:10:28.748 } 00:10:28.748 ] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.748 [2024-12-07 02:43:39.731178] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.748 [2024-12-07 02:43:39.731265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.748 [2024-12-07 02:43:39.731310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.748 [2024-12-07 02:43:39.733391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.748 [2024-12-07 02:43:39.733477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.748 02:43:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.748 "name": "Existed_Raid", 00:10:28.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.748 "strip_size_kb": 64, 00:10:28.748 "state": "configuring", 00:10:28.748 
"raid_level": "concat", 00:10:28.748 "superblock": false, 00:10:28.748 "num_base_bdevs": 4, 00:10:28.748 "num_base_bdevs_discovered": 3, 00:10:28.748 "num_base_bdevs_operational": 4, 00:10:28.748 "base_bdevs_list": [ 00:10:28.748 { 00:10:28.748 "name": "BaseBdev1", 00:10:28.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.748 "is_configured": false, 00:10:28.748 "data_offset": 0, 00:10:28.748 "data_size": 0 00:10:28.748 }, 00:10:28.748 { 00:10:28.748 "name": "BaseBdev2", 00:10:28.748 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:28.748 "is_configured": true, 00:10:28.748 "data_offset": 0, 00:10:28.748 "data_size": 65536 00:10:28.748 }, 00:10:28.748 { 00:10:28.748 "name": "BaseBdev3", 00:10:28.748 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:28.748 "is_configured": true, 00:10:28.748 "data_offset": 0, 00:10:28.748 "data_size": 65536 00:10:28.748 }, 00:10:28.748 { 00:10:28.748 "name": "BaseBdev4", 00:10:28.748 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:28.748 "is_configured": true, 00:10:28.748 "data_offset": 0, 00:10:28.748 "data_size": 65536 00:10:28.748 } 00:10:28.748 ] 00:10:28.748 }' 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.748 02:43:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.319 [2024-12-07 02:43:40.146409] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.319 "name": "Existed_Raid", 00:10:29.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.319 "strip_size_kb": 64, 00:10:29.319 "state": "configuring", 00:10:29.319 "raid_level": "concat", 00:10:29.319 "superblock": false, 
00:10:29.319 "num_base_bdevs": 4, 00:10:29.319 "num_base_bdevs_discovered": 2, 00:10:29.319 "num_base_bdevs_operational": 4, 00:10:29.319 "base_bdevs_list": [ 00:10:29.319 { 00:10:29.319 "name": "BaseBdev1", 00:10:29.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.319 "is_configured": false, 00:10:29.319 "data_offset": 0, 00:10:29.319 "data_size": 0 00:10:29.319 }, 00:10:29.319 { 00:10:29.319 "name": null, 00:10:29.319 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:29.319 "is_configured": false, 00:10:29.319 "data_offset": 0, 00:10:29.319 "data_size": 65536 00:10:29.319 }, 00:10:29.319 { 00:10:29.319 "name": "BaseBdev3", 00:10:29.319 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:29.319 "is_configured": true, 00:10:29.319 "data_offset": 0, 00:10:29.319 "data_size": 65536 00:10:29.319 }, 00:10:29.319 { 00:10:29.319 "name": "BaseBdev4", 00:10:29.319 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:29.319 "is_configured": true, 00:10:29.319 "data_offset": 0, 00:10:29.319 "data_size": 65536 00:10:29.319 } 00:10:29.319 ] 00:10:29.319 }' 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.319 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:29.588 02:43:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.588 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.858 [2024-12-07 02:43:40.662461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.858 BaseBdev1 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.858 [ 00:10:29.858 { 00:10:29.858 "name": "BaseBdev1", 00:10:29.858 "aliases": [ 00:10:29.858 "46eece7d-03bf-43a5-9f99-4de259da9afd" 00:10:29.858 ], 00:10:29.858 "product_name": "Malloc disk", 00:10:29.858 "block_size": 512, 00:10:29.858 "num_blocks": 65536, 00:10:29.858 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:29.858 "assigned_rate_limits": { 00:10:29.858 "rw_ios_per_sec": 0, 00:10:29.858 "rw_mbytes_per_sec": 0, 00:10:29.858 "r_mbytes_per_sec": 0, 00:10:29.858 "w_mbytes_per_sec": 0 00:10:29.858 }, 00:10:29.858 "claimed": true, 00:10:29.858 "claim_type": "exclusive_write", 00:10:29.858 "zoned": false, 00:10:29.858 "supported_io_types": { 00:10:29.858 "read": true, 00:10:29.858 "write": true, 00:10:29.858 "unmap": true, 00:10:29.858 "flush": true, 00:10:29.858 "reset": true, 00:10:29.858 "nvme_admin": false, 00:10:29.858 "nvme_io": false, 00:10:29.858 "nvme_io_md": false, 00:10:29.858 "write_zeroes": true, 00:10:29.858 "zcopy": true, 00:10:29.858 "get_zone_info": false, 00:10:29.858 "zone_management": false, 00:10:29.858 "zone_append": false, 00:10:29.858 "compare": false, 00:10:29.858 "compare_and_write": false, 00:10:29.858 "abort": true, 00:10:29.858 "seek_hole": false, 00:10:29.858 "seek_data": false, 00:10:29.858 "copy": true, 00:10:29.858 "nvme_iov_md": false 00:10:29.858 }, 00:10:29.858 "memory_domains": [ 00:10:29.858 { 00:10:29.858 "dma_device_id": "system", 00:10:29.858 "dma_device_type": 1 00:10:29.858 }, 00:10:29.858 { 00:10:29.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.858 "dma_device_type": 2 00:10:29.858 } 00:10:29.858 ], 00:10:29.858 "driver_specific": {} 00:10:29.858 } 00:10:29.858 ] 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.858 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.858 "name": "Existed_Raid", 00:10:29.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.858 "strip_size_kb": 64, 00:10:29.858 "state": "configuring", 00:10:29.858 "raid_level": "concat", 00:10:29.858 "superblock": false, 
00:10:29.858 "num_base_bdevs": 4, 00:10:29.858 "num_base_bdevs_discovered": 3, 00:10:29.858 "num_base_bdevs_operational": 4, 00:10:29.858 "base_bdevs_list": [ 00:10:29.858 { 00:10:29.858 "name": "BaseBdev1", 00:10:29.858 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:29.858 "is_configured": true, 00:10:29.858 "data_offset": 0, 00:10:29.858 "data_size": 65536 00:10:29.858 }, 00:10:29.858 { 00:10:29.858 "name": null, 00:10:29.858 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:29.858 "is_configured": false, 00:10:29.858 "data_offset": 0, 00:10:29.858 "data_size": 65536 00:10:29.858 }, 00:10:29.858 { 00:10:29.858 "name": "BaseBdev3", 00:10:29.858 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:29.859 "is_configured": true, 00:10:29.859 "data_offset": 0, 00:10:29.859 "data_size": 65536 00:10:29.859 }, 00:10:29.859 { 00:10:29.859 "name": "BaseBdev4", 00:10:29.859 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:29.859 "is_configured": true, 00:10:29.859 "data_offset": 0, 00:10:29.859 "data_size": 65536 00:10:29.859 } 00:10:29.859 ] 00:10:29.859 }' 00:10:29.859 02:43:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.859 02:43:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.118 02:43:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.118 [2024-12-07 02:43:41.173620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.118 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.119 02:43:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.119 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.379 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.379 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.379 "name": "Existed_Raid", 00:10:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.379 "strip_size_kb": 64, 00:10:30.379 "state": "configuring", 00:10:30.379 "raid_level": "concat", 00:10:30.379 "superblock": false, 00:10:30.379 "num_base_bdevs": 4, 00:10:30.379 "num_base_bdevs_discovered": 2, 00:10:30.379 "num_base_bdevs_operational": 4, 00:10:30.379 "base_bdevs_list": [ 00:10:30.379 { 00:10:30.379 "name": "BaseBdev1", 00:10:30.379 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:30.379 "is_configured": true, 00:10:30.379 "data_offset": 0, 00:10:30.379 "data_size": 65536 00:10:30.379 }, 00:10:30.379 { 00:10:30.379 "name": null, 00:10:30.379 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:30.379 "is_configured": false, 00:10:30.379 "data_offset": 0, 00:10:30.379 "data_size": 65536 00:10:30.379 }, 00:10:30.379 { 00:10:30.379 "name": null, 00:10:30.379 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:30.379 "is_configured": false, 00:10:30.379 "data_offset": 0, 00:10:30.379 "data_size": 65536 00:10:30.379 }, 00:10:30.379 { 00:10:30.379 "name": "BaseBdev4", 00:10:30.379 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:30.379 "is_configured": true, 00:10:30.379 "data_offset": 0, 00:10:30.379 "data_size": 65536 00:10:30.379 } 00:10:30.379 ] 00:10:30.379 }' 00:10:30.379 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.379 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.639 [2024-12-07 02:43:41.576987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.639 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.640 "name": "Existed_Raid", 00:10:30.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.640 "strip_size_kb": 64, 00:10:30.640 "state": "configuring", 00:10:30.640 "raid_level": "concat", 00:10:30.640 "superblock": false, 00:10:30.640 "num_base_bdevs": 4, 00:10:30.640 "num_base_bdevs_discovered": 3, 00:10:30.640 "num_base_bdevs_operational": 4, 00:10:30.640 "base_bdevs_list": [ 00:10:30.640 { 00:10:30.640 "name": "BaseBdev1", 00:10:30.640 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:30.640 "is_configured": true, 00:10:30.640 "data_offset": 0, 00:10:30.640 "data_size": 65536 00:10:30.640 }, 00:10:30.640 { 00:10:30.640 "name": null, 00:10:30.640 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:30.640 "is_configured": false, 00:10:30.640 "data_offset": 0, 00:10:30.640 "data_size": 65536 00:10:30.640 }, 00:10:30.640 { 00:10:30.640 "name": "BaseBdev3", 00:10:30.640 "uuid": 
"a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:30.640 "is_configured": true, 00:10:30.640 "data_offset": 0, 00:10:30.640 "data_size": 65536 00:10:30.640 }, 00:10:30.640 { 00:10:30.640 "name": "BaseBdev4", 00:10:30.640 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:30.640 "is_configured": true, 00:10:30.640 "data_offset": 0, 00:10:30.640 "data_size": 65536 00:10:30.640 } 00:10:30.640 ] 00:10:30.640 }' 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.640 02:43:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.209 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.210 [2024-12-07 02:43:42.084133] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.210 "name": "Existed_Raid", 00:10:31.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.210 "strip_size_kb": 64, 00:10:31.210 "state": "configuring", 00:10:31.210 "raid_level": "concat", 00:10:31.210 "superblock": false, 00:10:31.210 "num_base_bdevs": 4, 00:10:31.210 
"num_base_bdevs_discovered": 2, 00:10:31.210 "num_base_bdevs_operational": 4, 00:10:31.210 "base_bdevs_list": [ 00:10:31.210 { 00:10:31.210 "name": null, 00:10:31.210 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:31.210 "is_configured": false, 00:10:31.210 "data_offset": 0, 00:10:31.210 "data_size": 65536 00:10:31.210 }, 00:10:31.210 { 00:10:31.210 "name": null, 00:10:31.210 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:31.210 "is_configured": false, 00:10:31.210 "data_offset": 0, 00:10:31.210 "data_size": 65536 00:10:31.210 }, 00:10:31.210 { 00:10:31.210 "name": "BaseBdev3", 00:10:31.210 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:31.210 "is_configured": true, 00:10:31.210 "data_offset": 0, 00:10:31.210 "data_size": 65536 00:10:31.210 }, 00:10:31.210 { 00:10:31.210 "name": "BaseBdev4", 00:10:31.210 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:31.210 "is_configured": true, 00:10:31.210 "data_offset": 0, 00:10:31.210 "data_size": 65536 00:10:31.210 } 00:10:31.210 ] 00:10:31.210 }' 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.210 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.470 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.470 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.470 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.470 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.730 [2024-12-07 02:43:42.587242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.730 "name": "Existed_Raid", 00:10:31.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.730 "strip_size_kb": 64, 00:10:31.730 "state": "configuring", 00:10:31.730 "raid_level": "concat", 00:10:31.730 "superblock": false, 00:10:31.730 "num_base_bdevs": 4, 00:10:31.730 "num_base_bdevs_discovered": 3, 00:10:31.730 "num_base_bdevs_operational": 4, 00:10:31.730 "base_bdevs_list": [ 00:10:31.730 { 00:10:31.730 "name": null, 00:10:31.730 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:31.730 "is_configured": false, 00:10:31.730 "data_offset": 0, 00:10:31.730 "data_size": 65536 00:10:31.730 }, 00:10:31.730 { 00:10:31.730 "name": "BaseBdev2", 00:10:31.730 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:31.730 "is_configured": true, 00:10:31.730 "data_offset": 0, 00:10:31.730 "data_size": 65536 00:10:31.730 }, 00:10:31.730 { 00:10:31.730 "name": "BaseBdev3", 00:10:31.730 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:31.730 "is_configured": true, 00:10:31.730 "data_offset": 0, 00:10:31.730 "data_size": 65536 00:10:31.730 }, 00:10:31.730 { 00:10:31.730 "name": "BaseBdev4", 00:10:31.730 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:31.730 "is_configured": true, 00:10:31.730 "data_offset": 0, 00:10:31.730 "data_size": 65536 00:10:31.730 } 00:10:31.730 ] 00:10:31.730 }' 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.730 02:43:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.990 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46eece7d-03bf-43a5-9f99-4de259da9afd 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.249 [2024-12-07 02:43:43.116673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:32.249 [2024-12-07 02:43:43.116752] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:32.249 [2024-12-07 02:43:43.116764] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:32.249 [2024-12-07 02:43:43.117055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:32.249 [2024-12-07 02:43:43.117194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:32.249 [2024-12-07 02:43:43.117208] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:32.249 [2024-12-07 02:43:43.117409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.249 NewBaseBdev 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.249 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.250 [ 00:10:32.250 { 00:10:32.250 "name": "NewBaseBdev", 00:10:32.250 "aliases": [ 00:10:32.250 "46eece7d-03bf-43a5-9f99-4de259da9afd" 00:10:32.250 ], 00:10:32.250 "product_name": "Malloc disk", 00:10:32.250 "block_size": 512, 00:10:32.250 "num_blocks": 65536, 00:10:32.250 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:32.250 "assigned_rate_limits": { 00:10:32.250 "rw_ios_per_sec": 0, 00:10:32.250 "rw_mbytes_per_sec": 0, 00:10:32.250 "r_mbytes_per_sec": 0, 00:10:32.250 "w_mbytes_per_sec": 0 00:10:32.250 }, 00:10:32.250 "claimed": true, 00:10:32.250 "claim_type": "exclusive_write", 00:10:32.250 "zoned": false, 00:10:32.250 "supported_io_types": { 00:10:32.250 "read": true, 00:10:32.250 "write": true, 00:10:32.250 "unmap": true, 00:10:32.250 "flush": true, 00:10:32.250 "reset": true, 00:10:32.250 "nvme_admin": false, 00:10:32.250 "nvme_io": false, 00:10:32.250 "nvme_io_md": false, 00:10:32.250 "write_zeroes": true, 00:10:32.250 "zcopy": true, 00:10:32.250 "get_zone_info": false, 00:10:32.250 "zone_management": false, 00:10:32.250 "zone_append": false, 00:10:32.250 "compare": false, 00:10:32.250 "compare_and_write": false, 00:10:32.250 "abort": true, 00:10:32.250 "seek_hole": false, 00:10:32.250 "seek_data": false, 00:10:32.250 "copy": true, 00:10:32.250 "nvme_iov_md": false 00:10:32.250 }, 00:10:32.250 "memory_domains": [ 00:10:32.250 { 00:10:32.250 "dma_device_id": "system", 00:10:32.250 "dma_device_type": 1 00:10:32.250 }, 00:10:32.250 { 00:10:32.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.250 "dma_device_type": 2 00:10:32.250 } 00:10:32.250 ], 00:10:32.250 "driver_specific": {} 00:10:32.250 } 00:10:32.250 ] 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.250 "name": "Existed_Raid", 00:10:32.250 "uuid": "1c0ea73b-5fbb-4705-a676-69889d0ecb52", 00:10:32.250 "strip_size_kb": 64, 00:10:32.250 "state": "online", 00:10:32.250 "raid_level": "concat", 00:10:32.250 "superblock": false, 00:10:32.250 
"num_base_bdevs": 4, 00:10:32.250 "num_base_bdevs_discovered": 4, 00:10:32.250 "num_base_bdevs_operational": 4, 00:10:32.250 "base_bdevs_list": [ 00:10:32.250 { 00:10:32.250 "name": "NewBaseBdev", 00:10:32.250 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:32.250 "is_configured": true, 00:10:32.250 "data_offset": 0, 00:10:32.250 "data_size": 65536 00:10:32.250 }, 00:10:32.250 { 00:10:32.250 "name": "BaseBdev2", 00:10:32.250 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:32.250 "is_configured": true, 00:10:32.250 "data_offset": 0, 00:10:32.250 "data_size": 65536 00:10:32.250 }, 00:10:32.250 { 00:10:32.250 "name": "BaseBdev3", 00:10:32.250 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:32.250 "is_configured": true, 00:10:32.250 "data_offset": 0, 00:10:32.250 "data_size": 65536 00:10:32.250 }, 00:10:32.250 { 00:10:32.250 "name": "BaseBdev4", 00:10:32.250 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:32.250 "is_configured": true, 00:10:32.250 "data_offset": 0, 00:10:32.250 "data_size": 65536 00:10:32.250 } 00:10:32.250 ] 00:10:32.250 }' 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.250 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:32.819 02:43:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.819 [2024-12-07 02:43:43.612109] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.819 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.819 "name": "Existed_Raid", 00:10:32.819 "aliases": [ 00:10:32.819 "1c0ea73b-5fbb-4705-a676-69889d0ecb52" 00:10:32.819 ], 00:10:32.819 "product_name": "Raid Volume", 00:10:32.819 "block_size": 512, 00:10:32.819 "num_blocks": 262144, 00:10:32.819 "uuid": "1c0ea73b-5fbb-4705-a676-69889d0ecb52", 00:10:32.819 "assigned_rate_limits": { 00:10:32.819 "rw_ios_per_sec": 0, 00:10:32.819 "rw_mbytes_per_sec": 0, 00:10:32.819 "r_mbytes_per_sec": 0, 00:10:32.819 "w_mbytes_per_sec": 0 00:10:32.819 }, 00:10:32.819 "claimed": false, 00:10:32.819 "zoned": false, 00:10:32.819 "supported_io_types": { 00:10:32.819 "read": true, 00:10:32.819 "write": true, 00:10:32.819 "unmap": true, 00:10:32.819 "flush": true, 00:10:32.819 "reset": true, 00:10:32.819 "nvme_admin": false, 00:10:32.819 "nvme_io": false, 00:10:32.819 "nvme_io_md": false, 00:10:32.819 "write_zeroes": true, 00:10:32.819 "zcopy": false, 00:10:32.819 "get_zone_info": false, 00:10:32.819 "zone_management": false, 00:10:32.819 "zone_append": false, 00:10:32.819 "compare": false, 00:10:32.819 "compare_and_write": false, 00:10:32.819 "abort": false, 00:10:32.819 "seek_hole": false, 00:10:32.819 "seek_data": false, 00:10:32.819 "copy": false, 00:10:32.819 "nvme_iov_md": false 00:10:32.819 }, 
00:10:32.819 "memory_domains": [ 00:10:32.819 { 00:10:32.819 "dma_device_id": "system", 00:10:32.819 "dma_device_type": 1 00:10:32.819 }, 00:10:32.819 { 00:10:32.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.819 "dma_device_type": 2 00:10:32.819 }, 00:10:32.819 { 00:10:32.819 "dma_device_id": "system", 00:10:32.819 "dma_device_type": 1 00:10:32.819 }, 00:10:32.819 { 00:10:32.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.819 "dma_device_type": 2 00:10:32.819 }, 00:10:32.819 { 00:10:32.820 "dma_device_id": "system", 00:10:32.820 "dma_device_type": 1 00:10:32.820 }, 00:10:32.820 { 00:10:32.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.820 "dma_device_type": 2 00:10:32.820 }, 00:10:32.820 { 00:10:32.820 "dma_device_id": "system", 00:10:32.820 "dma_device_type": 1 00:10:32.820 }, 00:10:32.820 { 00:10:32.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.820 "dma_device_type": 2 00:10:32.820 } 00:10:32.820 ], 00:10:32.820 "driver_specific": { 00:10:32.820 "raid": { 00:10:32.820 "uuid": "1c0ea73b-5fbb-4705-a676-69889d0ecb52", 00:10:32.820 "strip_size_kb": 64, 00:10:32.820 "state": "online", 00:10:32.820 "raid_level": "concat", 00:10:32.820 "superblock": false, 00:10:32.820 "num_base_bdevs": 4, 00:10:32.820 "num_base_bdevs_discovered": 4, 00:10:32.820 "num_base_bdevs_operational": 4, 00:10:32.820 "base_bdevs_list": [ 00:10:32.820 { 00:10:32.820 "name": "NewBaseBdev", 00:10:32.820 "uuid": "46eece7d-03bf-43a5-9f99-4de259da9afd", 00:10:32.820 "is_configured": true, 00:10:32.820 "data_offset": 0, 00:10:32.820 "data_size": 65536 00:10:32.820 }, 00:10:32.820 { 00:10:32.820 "name": "BaseBdev2", 00:10:32.820 "uuid": "48432e93-e8e4-4d01-9c65-5d2a99e15679", 00:10:32.820 "is_configured": true, 00:10:32.820 "data_offset": 0, 00:10:32.820 "data_size": 65536 00:10:32.820 }, 00:10:32.820 { 00:10:32.820 "name": "BaseBdev3", 00:10:32.820 "uuid": "a89c4601-c796-40ba-920a-47d7c499ce62", 00:10:32.820 "is_configured": true, 00:10:32.820 "data_offset": 0, 
00:10:32.820 "data_size": 65536 00:10:32.820 }, 00:10:32.820 { 00:10:32.820 "name": "BaseBdev4", 00:10:32.820 "uuid": "1e1be905-2fa9-4929-b002-f80adebd49b5", 00:10:32.820 "is_configured": true, 00:10:32.820 "data_offset": 0, 00:10:32.820 "data_size": 65536 00:10:32.820 } 00:10:32.820 ] 00:10:32.820 } 00:10:32.820 } 00:10:32.820 }' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:32.820 BaseBdev2 00:10:32.820 BaseBdev3 00:10:32.820 BaseBdev4' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.820 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.081 [2024-12-07 02:43:43.939373] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.081 [2024-12-07 02:43:43.939403] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.081 [2024-12-07 02:43:43.939504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.081 [2024-12-07 02:43:43.939578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.081 [2024-12-07 02:43:43.939588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82401 00:10:33.081 02:43:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82401 ']' 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82401 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82401 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:33.081 killing process with pid 82401 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82401' 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 82401 00:10:33.081 [2024-12-07 02:43:43.991701] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.081 02:43:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 82401 00:10:33.081 [2024-12-07 02:43:44.069214] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.649 02:43:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:33.649 00:10:33.649 real 0m9.804s 00:10:33.649 user 0m16.434s 00:10:33.649 sys 0m2.171s 00:10:33.649 ************************************ 00:10:33.649 END TEST raid_state_function_test 00:10:33.649 ************************************ 00:10:33.649 02:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.649 02:43:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.649 02:43:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:33.649 02:43:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:33.649 02:43:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.649 02:43:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.649 ************************************ 00:10:33.649 START TEST raid_state_function_test_sb 00:10:33.649 ************************************ 00:10:33.649 02:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83056 00:10:33.650 02:43:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83056' 00:10:33.650 Process raid pid: 83056 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83056 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83056 ']' 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:33.650 02:43:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.650 [2024-12-07 02:43:44.625502] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:33.650 [2024-12-07 02:43:44.625736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.909 [2024-12-07 02:43:44.791775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.909 [2024-12-07 02:43:44.861896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.909 [2024-12-07 02:43:44.937988] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.909 [2024-12-07 02:43:44.938108] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.478 [2024-12-07 02:43:45.457790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.478 [2024-12-07 02:43:45.457841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.478 [2024-12-07 02:43:45.457853] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.478 [2024-12-07 02:43:45.457880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.478 [2024-12-07 02:43:45.457886] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:34.478 [2024-12-07 02:43:45.457900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.478 [2024-12-07 02:43:45.457906] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.478 [2024-12-07 02:43:45.457915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.478 02:43:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.478 "name": "Existed_Raid", 00:10:34.478 "uuid": "2b76adef-3eff-4409-b1d9-29fd623ec67a", 00:10:34.478 "strip_size_kb": 64, 00:10:34.478 "state": "configuring", 00:10:34.478 "raid_level": "concat", 00:10:34.478 "superblock": true, 00:10:34.478 "num_base_bdevs": 4, 00:10:34.478 "num_base_bdevs_discovered": 0, 00:10:34.478 "num_base_bdevs_operational": 4, 00:10:34.478 "base_bdevs_list": [ 00:10:34.478 { 00:10:34.478 "name": "BaseBdev1", 00:10:34.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.478 "is_configured": false, 00:10:34.478 "data_offset": 0, 00:10:34.478 "data_size": 0 00:10:34.478 }, 00:10:34.478 { 00:10:34.478 "name": "BaseBdev2", 00:10:34.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.478 "is_configured": false, 00:10:34.478 "data_offset": 0, 00:10:34.478 "data_size": 0 00:10:34.478 }, 00:10:34.478 { 00:10:34.478 "name": "BaseBdev3", 00:10:34.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.478 "is_configured": false, 00:10:34.478 "data_offset": 0, 00:10:34.478 "data_size": 0 00:10:34.478 }, 00:10:34.478 { 00:10:34.478 "name": "BaseBdev4", 00:10:34.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.478 "is_configured": false, 00:10:34.478 "data_offset": 0, 00:10:34.478 "data_size": 0 00:10:34.478 } 00:10:34.478 ] 00:10:34.478 }' 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.478 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 02:43:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.046 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.046 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 [2024-12-07 02:43:45.852996] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.046 [2024-12-07 02:43:45.853085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:35.046 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.046 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.046 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.046 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.046 [2024-12-07 02:43:45.865028] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.046 [2024-12-07 02:43:45.865100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.046 [2024-12-07 02:43:45.865126] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.046 [2024-12-07 02:43:45.865149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.047 [2024-12-07 02:43:45.865166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.047 [2024-12-07 02:43:45.865186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.047 [2024-12-07 02:43:45.865204] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:35.047 [2024-12-07 02:43:45.865225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.047 [2024-12-07 02:43:45.892000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.047 BaseBdev1 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.047 [ 00:10:35.047 { 00:10:35.047 "name": "BaseBdev1", 00:10:35.047 "aliases": [ 00:10:35.047 "7ee6a7a4-a540-493c-b67d-72abebaa5c9d" 00:10:35.047 ], 00:10:35.047 "product_name": "Malloc disk", 00:10:35.047 "block_size": 512, 00:10:35.047 "num_blocks": 65536, 00:10:35.047 "uuid": "7ee6a7a4-a540-493c-b67d-72abebaa5c9d", 00:10:35.047 "assigned_rate_limits": { 00:10:35.047 "rw_ios_per_sec": 0, 00:10:35.047 "rw_mbytes_per_sec": 0, 00:10:35.047 "r_mbytes_per_sec": 0, 00:10:35.047 "w_mbytes_per_sec": 0 00:10:35.047 }, 00:10:35.047 "claimed": true, 00:10:35.047 "claim_type": "exclusive_write", 00:10:35.047 "zoned": false, 00:10:35.047 "supported_io_types": { 00:10:35.047 "read": true, 00:10:35.047 "write": true, 00:10:35.047 "unmap": true, 00:10:35.047 "flush": true, 00:10:35.047 "reset": true, 00:10:35.047 "nvme_admin": false, 00:10:35.047 "nvme_io": false, 00:10:35.047 "nvme_io_md": false, 00:10:35.047 "write_zeroes": true, 00:10:35.047 "zcopy": true, 00:10:35.047 "get_zone_info": false, 00:10:35.047 "zone_management": false, 00:10:35.047 "zone_append": false, 00:10:35.047 "compare": false, 00:10:35.047 "compare_and_write": false, 00:10:35.047 "abort": true, 00:10:35.047 "seek_hole": false, 00:10:35.047 "seek_data": false, 00:10:35.047 "copy": true, 00:10:35.047 "nvme_iov_md": false 00:10:35.047 }, 00:10:35.047 "memory_domains": [ 00:10:35.047 { 00:10:35.047 "dma_device_id": "system", 00:10:35.047 "dma_device_type": 1 00:10:35.047 }, 00:10:35.047 { 00:10:35.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.047 "dma_device_type": 2 00:10:35.047 } 
00:10:35.047 ], 00:10:35.047 "driver_specific": {} 00:10:35.047 } 00:10:35.047 ] 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.047 02:43:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.047 "name": "Existed_Raid", 00:10:35.047 "uuid": "97802bd6-56c3-45d2-96ac-4f83a7df119b", 00:10:35.047 "strip_size_kb": 64, 00:10:35.047 "state": "configuring", 00:10:35.047 "raid_level": "concat", 00:10:35.047 "superblock": true, 00:10:35.047 "num_base_bdevs": 4, 00:10:35.047 "num_base_bdevs_discovered": 1, 00:10:35.047 "num_base_bdevs_operational": 4, 00:10:35.047 "base_bdevs_list": [ 00:10:35.047 { 00:10:35.047 "name": "BaseBdev1", 00:10:35.047 "uuid": "7ee6a7a4-a540-493c-b67d-72abebaa5c9d", 00:10:35.047 "is_configured": true, 00:10:35.047 "data_offset": 2048, 00:10:35.047 "data_size": 63488 00:10:35.047 }, 00:10:35.047 { 00:10:35.047 "name": "BaseBdev2", 00:10:35.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.047 "is_configured": false, 00:10:35.047 "data_offset": 0, 00:10:35.047 "data_size": 0 00:10:35.047 }, 00:10:35.047 { 00:10:35.047 "name": "BaseBdev3", 00:10:35.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.047 "is_configured": false, 00:10:35.047 "data_offset": 0, 00:10:35.047 "data_size": 0 00:10:35.047 }, 00:10:35.047 { 00:10:35.047 "name": "BaseBdev4", 00:10:35.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.047 "is_configured": false, 00:10:35.047 "data_offset": 0, 00:10:35.047 "data_size": 0 00:10:35.047 } 00:10:35.047 ] 00:10:35.047 }' 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.047 02:43:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.307 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.307 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.307 02:43:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.307 [2024-12-07 02:43:46.379252] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.307 [2024-12-07 02:43:46.379301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:35.566 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.566 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.566 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.566 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.566 [2024-12-07 02:43:46.391285] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.566 [2024-12-07 02:43:46.393384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.566 [2024-12-07 02:43:46.393476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.566 [2024-12-07 02:43:46.393489] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.567 [2024-12-07 02:43:46.393498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.567 [2024-12-07 02:43:46.393504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.567 [2024-12-07 02:43:46.393512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:35.567 "name": "Existed_Raid", 00:10:35.567 "uuid": "62350f80-1867-47cf-9d46-9cd9ad22dd7a", 00:10:35.567 "strip_size_kb": 64, 00:10:35.567 "state": "configuring", 00:10:35.567 "raid_level": "concat", 00:10:35.567 "superblock": true, 00:10:35.567 "num_base_bdevs": 4, 00:10:35.567 "num_base_bdevs_discovered": 1, 00:10:35.567 "num_base_bdevs_operational": 4, 00:10:35.567 "base_bdevs_list": [ 00:10:35.567 { 00:10:35.567 "name": "BaseBdev1", 00:10:35.567 "uuid": "7ee6a7a4-a540-493c-b67d-72abebaa5c9d", 00:10:35.567 "is_configured": true, 00:10:35.567 "data_offset": 2048, 00:10:35.567 "data_size": 63488 00:10:35.567 }, 00:10:35.567 { 00:10:35.567 "name": "BaseBdev2", 00:10:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.567 "is_configured": false, 00:10:35.567 "data_offset": 0, 00:10:35.567 "data_size": 0 00:10:35.567 }, 00:10:35.567 { 00:10:35.567 "name": "BaseBdev3", 00:10:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.567 "is_configured": false, 00:10:35.567 "data_offset": 0, 00:10:35.567 "data_size": 0 00:10:35.567 }, 00:10:35.567 { 00:10:35.567 "name": "BaseBdev4", 00:10:35.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.567 "is_configured": false, 00:10:35.567 "data_offset": 0, 00:10:35.567 "data_size": 0 00:10:35.567 } 00:10:35.567 ] 00:10:35.567 }' 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.567 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.826 [2024-12-07 02:43:46.881744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:35.826 BaseBdev2 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.826 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.085 [ 00:10:36.085 { 00:10:36.085 "name": "BaseBdev2", 00:10:36.085 "aliases": [ 00:10:36.085 "61c27b59-fe89-4568-8542-bec162d67113" 00:10:36.085 ], 00:10:36.085 "product_name": "Malloc disk", 00:10:36.085 "block_size": 512, 00:10:36.085 "num_blocks": 65536, 00:10:36.085 "uuid": "61c27b59-fe89-4568-8542-bec162d67113", 
00:10:36.085 "assigned_rate_limits": { 00:10:36.085 "rw_ios_per_sec": 0, 00:10:36.085 "rw_mbytes_per_sec": 0, 00:10:36.085 "r_mbytes_per_sec": 0, 00:10:36.085 "w_mbytes_per_sec": 0 00:10:36.085 }, 00:10:36.085 "claimed": true, 00:10:36.085 "claim_type": "exclusive_write", 00:10:36.085 "zoned": false, 00:10:36.085 "supported_io_types": { 00:10:36.085 "read": true, 00:10:36.085 "write": true, 00:10:36.085 "unmap": true, 00:10:36.085 "flush": true, 00:10:36.085 "reset": true, 00:10:36.085 "nvme_admin": false, 00:10:36.085 "nvme_io": false, 00:10:36.085 "nvme_io_md": false, 00:10:36.085 "write_zeroes": true, 00:10:36.085 "zcopy": true, 00:10:36.085 "get_zone_info": false, 00:10:36.085 "zone_management": false, 00:10:36.085 "zone_append": false, 00:10:36.085 "compare": false, 00:10:36.085 "compare_and_write": false, 00:10:36.085 "abort": true, 00:10:36.085 "seek_hole": false, 00:10:36.085 "seek_data": false, 00:10:36.085 "copy": true, 00:10:36.085 "nvme_iov_md": false 00:10:36.085 }, 00:10:36.085 "memory_domains": [ 00:10:36.085 { 00:10:36.085 "dma_device_id": "system", 00:10:36.085 "dma_device_type": 1 00:10:36.085 }, 00:10:36.085 { 00:10:36.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.085 "dma_device_type": 2 00:10:36.085 } 00:10:36.085 ], 00:10:36.085 "driver_specific": {} 00:10:36.085 } 00:10:36.085 ] 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.085 "name": "Existed_Raid", 00:10:36.085 "uuid": "62350f80-1867-47cf-9d46-9cd9ad22dd7a", 00:10:36.085 "strip_size_kb": 64, 00:10:36.085 "state": "configuring", 00:10:36.085 "raid_level": "concat", 00:10:36.085 "superblock": true, 00:10:36.085 "num_base_bdevs": 4, 00:10:36.085 "num_base_bdevs_discovered": 2, 00:10:36.085 
"num_base_bdevs_operational": 4, 00:10:36.085 "base_bdevs_list": [ 00:10:36.085 { 00:10:36.085 "name": "BaseBdev1", 00:10:36.085 "uuid": "7ee6a7a4-a540-493c-b67d-72abebaa5c9d", 00:10:36.085 "is_configured": true, 00:10:36.085 "data_offset": 2048, 00:10:36.085 "data_size": 63488 00:10:36.085 }, 00:10:36.085 { 00:10:36.085 "name": "BaseBdev2", 00:10:36.085 "uuid": "61c27b59-fe89-4568-8542-bec162d67113", 00:10:36.085 "is_configured": true, 00:10:36.085 "data_offset": 2048, 00:10:36.085 "data_size": 63488 00:10:36.085 }, 00:10:36.085 { 00:10:36.085 "name": "BaseBdev3", 00:10:36.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.085 "is_configured": false, 00:10:36.085 "data_offset": 0, 00:10:36.085 "data_size": 0 00:10:36.085 }, 00:10:36.085 { 00:10:36.085 "name": "BaseBdev4", 00:10:36.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.085 "is_configured": false, 00:10:36.085 "data_offset": 0, 00:10:36.085 "data_size": 0 00:10:36.085 } 00:10:36.085 ] 00:10:36.085 }' 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.085 02:43:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.345 [2024-12-07 02:43:47.341701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.345 BaseBdev3 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.345 [ 00:10:36.345 { 00:10:36.345 "name": "BaseBdev3", 00:10:36.345 "aliases": [ 00:10:36.345 "cd23e964-002f-409d-94cc-49324bee9630" 00:10:36.345 ], 00:10:36.345 "product_name": "Malloc disk", 00:10:36.345 "block_size": 512, 00:10:36.345 "num_blocks": 65536, 00:10:36.345 "uuid": "cd23e964-002f-409d-94cc-49324bee9630", 00:10:36.345 "assigned_rate_limits": { 00:10:36.345 "rw_ios_per_sec": 0, 00:10:36.345 "rw_mbytes_per_sec": 0, 00:10:36.345 "r_mbytes_per_sec": 0, 00:10:36.345 "w_mbytes_per_sec": 0 00:10:36.345 }, 00:10:36.345 "claimed": true, 00:10:36.345 "claim_type": "exclusive_write", 00:10:36.345 "zoned": false, 00:10:36.345 "supported_io_types": { 
00:10:36.345 "read": true, 00:10:36.345 "write": true, 00:10:36.345 "unmap": true, 00:10:36.345 "flush": true, 00:10:36.345 "reset": true, 00:10:36.345 "nvme_admin": false, 00:10:36.345 "nvme_io": false, 00:10:36.345 "nvme_io_md": false, 00:10:36.345 "write_zeroes": true, 00:10:36.345 "zcopy": true, 00:10:36.345 "get_zone_info": false, 00:10:36.345 "zone_management": false, 00:10:36.345 "zone_append": false, 00:10:36.345 "compare": false, 00:10:36.345 "compare_and_write": false, 00:10:36.345 "abort": true, 00:10:36.345 "seek_hole": false, 00:10:36.345 "seek_data": false, 00:10:36.345 "copy": true, 00:10:36.345 "nvme_iov_md": false 00:10:36.345 }, 00:10:36.345 "memory_domains": [ 00:10:36.345 { 00:10:36.345 "dma_device_id": "system", 00:10:36.345 "dma_device_type": 1 00:10:36.345 }, 00:10:36.345 { 00:10:36.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.345 "dma_device_type": 2 00:10:36.345 } 00:10:36.345 ], 00:10:36.345 "driver_specific": {} 00:10:36.345 } 00:10:36.345 ] 00:10:36.345 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.346 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.604 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.604 "name": "Existed_Raid", 00:10:36.604 "uuid": "62350f80-1867-47cf-9d46-9cd9ad22dd7a", 00:10:36.604 "strip_size_kb": 64, 00:10:36.604 "state": "configuring", 00:10:36.604 "raid_level": "concat", 00:10:36.604 "superblock": true, 00:10:36.604 "num_base_bdevs": 4, 00:10:36.604 "num_base_bdevs_discovered": 3, 00:10:36.604 "num_base_bdevs_operational": 4, 00:10:36.604 "base_bdevs_list": [ 00:10:36.604 { 00:10:36.604 "name": "BaseBdev1", 00:10:36.604 "uuid": "7ee6a7a4-a540-493c-b67d-72abebaa5c9d", 00:10:36.604 "is_configured": true, 00:10:36.604 "data_offset": 2048, 00:10:36.604 "data_size": 63488 00:10:36.604 }, 00:10:36.604 { 00:10:36.604 "name": "BaseBdev2", 00:10:36.604 
"uuid": "61c27b59-fe89-4568-8542-bec162d67113", 00:10:36.604 "is_configured": true, 00:10:36.604 "data_offset": 2048, 00:10:36.604 "data_size": 63488 00:10:36.604 }, 00:10:36.604 { 00:10:36.604 "name": "BaseBdev3", 00:10:36.604 "uuid": "cd23e964-002f-409d-94cc-49324bee9630", 00:10:36.604 "is_configured": true, 00:10:36.604 "data_offset": 2048, 00:10:36.604 "data_size": 63488 00:10:36.604 }, 00:10:36.604 { 00:10:36.604 "name": "BaseBdev4", 00:10:36.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.604 "is_configured": false, 00:10:36.604 "data_offset": 0, 00:10:36.604 "data_size": 0 00:10:36.604 } 00:10:36.604 ] 00:10:36.604 }' 00:10:36.604 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.604 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.865 [2024-12-07 02:43:47.793713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.865 [2024-12-07 02:43:47.793956] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:36.865 [2024-12-07 02:43:47.793982] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:36.865 [2024-12-07 02:43:47.794283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:36.865 BaseBdev4 00:10:36.865 [2024-12-07 02:43:47.794419] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:36.865 [2024-12-07 02:43:47.794433] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:10:36.865 [2024-12-07 02:43:47.794544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.865 [ 00:10:36.865 { 00:10:36.865 "name": "BaseBdev4", 00:10:36.865 "aliases": [ 00:10:36.865 "806b9077-bca4-461b-89d9-c2e6dbd2be9a" 00:10:36.865 ], 00:10:36.865 "product_name": "Malloc disk", 00:10:36.865 "block_size": 512, 00:10:36.865 
"num_blocks": 65536, 00:10:36.865 "uuid": "806b9077-bca4-461b-89d9-c2e6dbd2be9a", 00:10:36.865 "assigned_rate_limits": { 00:10:36.865 "rw_ios_per_sec": 0, 00:10:36.865 "rw_mbytes_per_sec": 0, 00:10:36.865 "r_mbytes_per_sec": 0, 00:10:36.865 "w_mbytes_per_sec": 0 00:10:36.865 }, 00:10:36.865 "claimed": true, 00:10:36.865 "claim_type": "exclusive_write", 00:10:36.865 "zoned": false, 00:10:36.865 "supported_io_types": { 00:10:36.865 "read": true, 00:10:36.865 "write": true, 00:10:36.865 "unmap": true, 00:10:36.865 "flush": true, 00:10:36.865 "reset": true, 00:10:36.865 "nvme_admin": false, 00:10:36.865 "nvme_io": false, 00:10:36.865 "nvme_io_md": false, 00:10:36.865 "write_zeroes": true, 00:10:36.865 "zcopy": true, 00:10:36.865 "get_zone_info": false, 00:10:36.865 "zone_management": false, 00:10:36.865 "zone_append": false, 00:10:36.865 "compare": false, 00:10:36.865 "compare_and_write": false, 00:10:36.865 "abort": true, 00:10:36.865 "seek_hole": false, 00:10:36.865 "seek_data": false, 00:10:36.865 "copy": true, 00:10:36.865 "nvme_iov_md": false 00:10:36.865 }, 00:10:36.865 "memory_domains": [ 00:10:36.865 { 00:10:36.865 "dma_device_id": "system", 00:10:36.865 "dma_device_type": 1 00:10:36.865 }, 00:10:36.865 { 00:10:36.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.865 "dma_device_type": 2 00:10:36.865 } 00:10:36.865 ], 00:10:36.865 "driver_specific": {} 00:10:36.865 } 00:10:36.865 ] 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.865 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.865 "name": "Existed_Raid", 00:10:36.865 "uuid": "62350f80-1867-47cf-9d46-9cd9ad22dd7a", 00:10:36.865 "strip_size_kb": 64, 00:10:36.865 "state": "online", 00:10:36.865 "raid_level": "concat", 00:10:36.865 "superblock": true, 00:10:36.865 "num_base_bdevs": 4, 
00:10:36.865 "num_base_bdevs_discovered": 4, 00:10:36.865 "num_base_bdevs_operational": 4, 00:10:36.865 "base_bdevs_list": [ 00:10:36.865 { 00:10:36.865 "name": "BaseBdev1", 00:10:36.865 "uuid": "7ee6a7a4-a540-493c-b67d-72abebaa5c9d", 00:10:36.865 "is_configured": true, 00:10:36.865 "data_offset": 2048, 00:10:36.865 "data_size": 63488 00:10:36.865 }, 00:10:36.865 { 00:10:36.865 "name": "BaseBdev2", 00:10:36.865 "uuid": "61c27b59-fe89-4568-8542-bec162d67113", 00:10:36.865 "is_configured": true, 00:10:36.865 "data_offset": 2048, 00:10:36.865 "data_size": 63488 00:10:36.865 }, 00:10:36.865 { 00:10:36.865 "name": "BaseBdev3", 00:10:36.865 "uuid": "cd23e964-002f-409d-94cc-49324bee9630", 00:10:36.865 "is_configured": true, 00:10:36.865 "data_offset": 2048, 00:10:36.865 "data_size": 63488 00:10:36.865 }, 00:10:36.865 { 00:10:36.865 "name": "BaseBdev4", 00:10:36.866 "uuid": "806b9077-bca4-461b-89d9-c2e6dbd2be9a", 00:10:36.866 "is_configured": true, 00:10:36.866 "data_offset": 2048, 00:10:36.866 "data_size": 63488 00:10:36.866 } 00:10:36.866 ] 00:10:36.866 }' 00:10:36.866 02:43:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.866 02:43:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.434 
02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.434 [2024-12-07 02:43:48.285251] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.434 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.434 "name": "Existed_Raid", 00:10:37.434 "aliases": [ 00:10:37.434 "62350f80-1867-47cf-9d46-9cd9ad22dd7a" 00:10:37.434 ], 00:10:37.434 "product_name": "Raid Volume", 00:10:37.434 "block_size": 512, 00:10:37.434 "num_blocks": 253952, 00:10:37.434 "uuid": "62350f80-1867-47cf-9d46-9cd9ad22dd7a", 00:10:37.434 "assigned_rate_limits": { 00:10:37.434 "rw_ios_per_sec": 0, 00:10:37.434 "rw_mbytes_per_sec": 0, 00:10:37.434 "r_mbytes_per_sec": 0, 00:10:37.434 "w_mbytes_per_sec": 0 00:10:37.434 }, 00:10:37.434 "claimed": false, 00:10:37.434 "zoned": false, 00:10:37.434 "supported_io_types": { 00:10:37.434 "read": true, 00:10:37.434 "write": true, 00:10:37.434 "unmap": true, 00:10:37.434 "flush": true, 00:10:37.434 "reset": true, 00:10:37.434 "nvme_admin": false, 00:10:37.434 "nvme_io": false, 00:10:37.434 "nvme_io_md": false, 00:10:37.434 "write_zeroes": true, 00:10:37.434 "zcopy": false, 00:10:37.434 "get_zone_info": false, 00:10:37.434 "zone_management": false, 00:10:37.434 "zone_append": false, 00:10:37.434 "compare": false, 00:10:37.434 "compare_and_write": false, 00:10:37.434 "abort": false, 00:10:37.434 "seek_hole": false, 00:10:37.434 "seek_data": false, 00:10:37.434 "copy": false, 00:10:37.434 
"nvme_iov_md": false 00:10:37.434 }, 00:10:37.434 "memory_domains": [ 00:10:37.434 { 00:10:37.435 "dma_device_id": "system", 00:10:37.435 "dma_device_type": 1 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.435 "dma_device_type": 2 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "dma_device_id": "system", 00:10:37.435 "dma_device_type": 1 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.435 "dma_device_type": 2 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "dma_device_id": "system", 00:10:37.435 "dma_device_type": 1 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.435 "dma_device_type": 2 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "dma_device_id": "system", 00:10:37.435 "dma_device_type": 1 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.435 "dma_device_type": 2 00:10:37.435 } 00:10:37.435 ], 00:10:37.435 "driver_specific": { 00:10:37.435 "raid": { 00:10:37.435 "uuid": "62350f80-1867-47cf-9d46-9cd9ad22dd7a", 00:10:37.435 "strip_size_kb": 64, 00:10:37.435 "state": "online", 00:10:37.435 "raid_level": "concat", 00:10:37.435 "superblock": true, 00:10:37.435 "num_base_bdevs": 4, 00:10:37.435 "num_base_bdevs_discovered": 4, 00:10:37.435 "num_base_bdevs_operational": 4, 00:10:37.435 "base_bdevs_list": [ 00:10:37.435 { 00:10:37.435 "name": "BaseBdev1", 00:10:37.435 "uuid": "7ee6a7a4-a540-493c-b67d-72abebaa5c9d", 00:10:37.435 "is_configured": true, 00:10:37.435 "data_offset": 2048, 00:10:37.435 "data_size": 63488 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "name": "BaseBdev2", 00:10:37.435 "uuid": "61c27b59-fe89-4568-8542-bec162d67113", 00:10:37.435 "is_configured": true, 00:10:37.435 "data_offset": 2048, 00:10:37.435 "data_size": 63488 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "name": "BaseBdev3", 00:10:37.435 "uuid": "cd23e964-002f-409d-94cc-49324bee9630", 00:10:37.435 "is_configured": true, 
00:10:37.435 "data_offset": 2048, 00:10:37.435 "data_size": 63488 00:10:37.435 }, 00:10:37.435 { 00:10:37.435 "name": "BaseBdev4", 00:10:37.435 "uuid": "806b9077-bca4-461b-89d9-c2e6dbd2be9a", 00:10:37.435 "is_configured": true, 00:10:37.435 "data_offset": 2048, 00:10:37.435 "data_size": 63488 00:10:37.435 } 00:10:37.435 ] 00:10:37.435 } 00:10:37.435 } 00:10:37.435 }' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:37.435 BaseBdev2 00:10:37.435 BaseBdev3 00:10:37.435 BaseBdev4' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.435 02:43:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.435 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 [2024-12-07 02:43:48.564474] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.695 [2024-12-07 02:43:48.564507] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.695 [2024-12-07 02:43:48.564557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:37.695 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.695 "name": "Existed_Raid", 00:10:37.695 "uuid": "62350f80-1867-47cf-9d46-9cd9ad22dd7a", 00:10:37.695 "strip_size_kb": 64, 00:10:37.695 "state": "offline", 00:10:37.695 "raid_level": "concat", 00:10:37.696 "superblock": true, 00:10:37.696 "num_base_bdevs": 4, 00:10:37.696 "num_base_bdevs_discovered": 3, 00:10:37.696 "num_base_bdevs_operational": 3, 00:10:37.696 "base_bdevs_list": [ 00:10:37.696 { 00:10:37.696 "name": null, 00:10:37.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.696 "is_configured": false, 00:10:37.696 "data_offset": 0, 00:10:37.696 "data_size": 63488 00:10:37.696 }, 00:10:37.696 { 00:10:37.696 "name": "BaseBdev2", 00:10:37.696 "uuid": "61c27b59-fe89-4568-8542-bec162d67113", 00:10:37.696 "is_configured": true, 00:10:37.696 "data_offset": 2048, 00:10:37.696 "data_size": 63488 00:10:37.696 }, 00:10:37.696 { 00:10:37.696 "name": "BaseBdev3", 00:10:37.696 "uuid": "cd23e964-002f-409d-94cc-49324bee9630", 00:10:37.696 "is_configured": true, 00:10:37.696 "data_offset": 2048, 00:10:37.696 "data_size": 63488 00:10:37.696 }, 00:10:37.696 { 00:10:37.696 "name": "BaseBdev4", 00:10:37.696 "uuid": "806b9077-bca4-461b-89d9-c2e6dbd2be9a", 00:10:37.696 "is_configured": true, 00:10:37.696 "data_offset": 2048, 00:10:37.696 "data_size": 63488 00:10:37.696 } 00:10:37.696 ] 00:10:37.696 }' 00:10:37.696 02:43:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.696 02:43:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.955 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.955 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.216 02:43:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 [2024-12-07 02:43:49.072327] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 [2024-12-07 02:43:49.148803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:38.216 02:43:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 [2024-12-07 02:43:49.221241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:38.216 [2024-12-07 02:43:49.221333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.216 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.476 BaseBdev2 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.476 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.477 [ 00:10:38.477 { 00:10:38.477 "name": "BaseBdev2", 00:10:38.477 "aliases": [ 00:10:38.477 
"df8f9126-be85-4403-8623-c3988a6d22b8" 00:10:38.477 ], 00:10:38.477 "product_name": "Malloc disk", 00:10:38.477 "block_size": 512, 00:10:38.477 "num_blocks": 65536, 00:10:38.477 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:38.477 "assigned_rate_limits": { 00:10:38.477 "rw_ios_per_sec": 0, 00:10:38.477 "rw_mbytes_per_sec": 0, 00:10:38.477 "r_mbytes_per_sec": 0, 00:10:38.477 "w_mbytes_per_sec": 0 00:10:38.477 }, 00:10:38.477 "claimed": false, 00:10:38.477 "zoned": false, 00:10:38.477 "supported_io_types": { 00:10:38.477 "read": true, 00:10:38.477 "write": true, 00:10:38.477 "unmap": true, 00:10:38.477 "flush": true, 00:10:38.477 "reset": true, 00:10:38.477 "nvme_admin": false, 00:10:38.477 "nvme_io": false, 00:10:38.477 "nvme_io_md": false, 00:10:38.477 "write_zeroes": true, 00:10:38.477 "zcopy": true, 00:10:38.477 "get_zone_info": false, 00:10:38.477 "zone_management": false, 00:10:38.477 "zone_append": false, 00:10:38.477 "compare": false, 00:10:38.477 "compare_and_write": false, 00:10:38.477 "abort": true, 00:10:38.477 "seek_hole": false, 00:10:38.477 "seek_data": false, 00:10:38.477 "copy": true, 00:10:38.477 "nvme_iov_md": false 00:10:38.477 }, 00:10:38.477 "memory_domains": [ 00:10:38.477 { 00:10:38.477 "dma_device_id": "system", 00:10:38.477 "dma_device_type": 1 00:10:38.477 }, 00:10:38.477 { 00:10:38.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.477 "dma_device_type": 2 00:10:38.477 } 00:10:38.477 ], 00:10:38.477 "driver_specific": {} 00:10:38.477 } 00:10:38.477 ] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.477 02:43:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.477 BaseBdev3 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.477 [ 00:10:38.477 { 
00:10:38.477 "name": "BaseBdev3", 00:10:38.477 "aliases": [ 00:10:38.477 "2183fc6a-86a0-4264-a406-ef0d84fc24a4" 00:10:38.477 ], 00:10:38.477 "product_name": "Malloc disk", 00:10:38.477 "block_size": 512, 00:10:38.477 "num_blocks": 65536, 00:10:38.477 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:38.477 "assigned_rate_limits": { 00:10:38.477 "rw_ios_per_sec": 0, 00:10:38.477 "rw_mbytes_per_sec": 0, 00:10:38.477 "r_mbytes_per_sec": 0, 00:10:38.477 "w_mbytes_per_sec": 0 00:10:38.477 }, 00:10:38.477 "claimed": false, 00:10:38.477 "zoned": false, 00:10:38.477 "supported_io_types": { 00:10:38.477 "read": true, 00:10:38.477 "write": true, 00:10:38.477 "unmap": true, 00:10:38.477 "flush": true, 00:10:38.477 "reset": true, 00:10:38.477 "nvme_admin": false, 00:10:38.477 "nvme_io": false, 00:10:38.477 "nvme_io_md": false, 00:10:38.477 "write_zeroes": true, 00:10:38.477 "zcopy": true, 00:10:38.477 "get_zone_info": false, 00:10:38.477 "zone_management": false, 00:10:38.477 "zone_append": false, 00:10:38.477 "compare": false, 00:10:38.477 "compare_and_write": false, 00:10:38.477 "abort": true, 00:10:38.477 "seek_hole": false, 00:10:38.477 "seek_data": false, 00:10:38.477 "copy": true, 00:10:38.477 "nvme_iov_md": false 00:10:38.477 }, 00:10:38.477 "memory_domains": [ 00:10:38.477 { 00:10:38.477 "dma_device_id": "system", 00:10:38.477 "dma_device_type": 1 00:10:38.477 }, 00:10:38.477 { 00:10:38.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.477 "dma_device_type": 2 00:10:38.477 } 00:10:38.477 ], 00:10:38.477 "driver_specific": {} 00:10:38.477 } 00:10:38.477 ] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.477 BaseBdev4 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:38.477 [ 00:10:38.477 { 00:10:38.477 "name": "BaseBdev4", 00:10:38.477 "aliases": [ 00:10:38.477 "4ed62cb0-85fb-4e49-8206-39f080b338d7" 00:10:38.477 ], 00:10:38.477 "product_name": "Malloc disk", 00:10:38.477 "block_size": 512, 00:10:38.477 "num_blocks": 65536, 00:10:38.477 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:38.477 "assigned_rate_limits": { 00:10:38.477 "rw_ios_per_sec": 0, 00:10:38.477 "rw_mbytes_per_sec": 0, 00:10:38.477 "r_mbytes_per_sec": 0, 00:10:38.477 "w_mbytes_per_sec": 0 00:10:38.477 }, 00:10:38.477 "claimed": false, 00:10:38.477 "zoned": false, 00:10:38.477 "supported_io_types": { 00:10:38.477 "read": true, 00:10:38.477 "write": true, 00:10:38.477 "unmap": true, 00:10:38.477 "flush": true, 00:10:38.477 "reset": true, 00:10:38.477 "nvme_admin": false, 00:10:38.477 "nvme_io": false, 00:10:38.477 "nvme_io_md": false, 00:10:38.477 "write_zeroes": true, 00:10:38.477 "zcopy": true, 00:10:38.477 "get_zone_info": false, 00:10:38.477 "zone_management": false, 00:10:38.477 "zone_append": false, 00:10:38.477 "compare": false, 00:10:38.477 "compare_and_write": false, 00:10:38.477 "abort": true, 00:10:38.477 "seek_hole": false, 00:10:38.477 "seek_data": false, 00:10:38.477 "copy": true, 00:10:38.477 "nvme_iov_md": false 00:10:38.477 }, 00:10:38.477 "memory_domains": [ 00:10:38.477 { 00:10:38.477 "dma_device_id": "system", 00:10:38.477 "dma_device_type": 1 00:10:38.477 }, 00:10:38.477 { 00:10:38.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.477 "dma_device_type": 2 00:10:38.477 } 00:10:38.477 ], 00:10:38.477 "driver_specific": {} 00:10:38.477 } 00:10:38.477 ] 00:10:38.477 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.478 02:43:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.478 [2024-12-07 02:43:49.463832] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.478 [2024-12-07 02:43:49.463919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.478 [2024-12-07 02:43:49.463947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.478 [2024-12-07 02:43:49.466037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.478 [2024-12-07 02:43:49.466089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.478 "name": "Existed_Raid", 00:10:38.478 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:38.478 "strip_size_kb": 64, 00:10:38.478 "state": "configuring", 00:10:38.478 "raid_level": "concat", 00:10:38.478 "superblock": true, 00:10:38.478 "num_base_bdevs": 4, 00:10:38.478 "num_base_bdevs_discovered": 3, 00:10:38.478 "num_base_bdevs_operational": 4, 00:10:38.478 "base_bdevs_list": [ 00:10:38.478 { 00:10:38.478 "name": "BaseBdev1", 00:10:38.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.478 "is_configured": false, 00:10:38.478 "data_offset": 0, 00:10:38.478 "data_size": 0 00:10:38.478 }, 00:10:38.478 { 00:10:38.478 "name": "BaseBdev2", 00:10:38.478 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:38.478 "is_configured": true, 00:10:38.478 "data_offset": 2048, 00:10:38.478 "data_size": 63488 
00:10:38.478 }, 00:10:38.478 { 00:10:38.478 "name": "BaseBdev3", 00:10:38.478 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:38.478 "is_configured": true, 00:10:38.478 "data_offset": 2048, 00:10:38.478 "data_size": 63488 00:10:38.478 }, 00:10:38.478 { 00:10:38.478 "name": "BaseBdev4", 00:10:38.478 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:38.478 "is_configured": true, 00:10:38.478 "data_offset": 2048, 00:10:38.478 "data_size": 63488 00:10:38.478 } 00:10:38.478 ] 00:10:38.478 }' 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.478 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.049 [2024-12-07 02:43:49.911051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.049 "name": "Existed_Raid", 00:10:39.049 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:39.049 "strip_size_kb": 64, 00:10:39.049 "state": "configuring", 00:10:39.049 "raid_level": "concat", 00:10:39.049 "superblock": true, 00:10:39.049 "num_base_bdevs": 4, 00:10:39.049 "num_base_bdevs_discovered": 2, 00:10:39.049 "num_base_bdevs_operational": 4, 00:10:39.049 "base_bdevs_list": [ 00:10:39.049 { 00:10:39.049 "name": "BaseBdev1", 00:10:39.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.049 "is_configured": false, 00:10:39.049 "data_offset": 0, 00:10:39.049 "data_size": 0 00:10:39.049 }, 00:10:39.049 { 00:10:39.049 "name": null, 00:10:39.049 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:39.049 "is_configured": false, 00:10:39.049 "data_offset": 0, 00:10:39.049 "data_size": 63488 
00:10:39.049 }, 00:10:39.049 { 00:10:39.049 "name": "BaseBdev3", 00:10:39.049 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:39.049 "is_configured": true, 00:10:39.049 "data_offset": 2048, 00:10:39.049 "data_size": 63488 00:10:39.049 }, 00:10:39.049 { 00:10:39.049 "name": "BaseBdev4", 00:10:39.049 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:39.049 "is_configured": true, 00:10:39.049 "data_offset": 2048, 00:10:39.049 "data_size": 63488 00:10:39.049 } 00:10:39.049 ] 00:10:39.049 }' 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.049 02:43:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.309 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.569 [2024-12-07 02:43:50.387377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.569 BaseBdev1 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.569 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.569 [ 00:10:39.569 { 00:10:39.569 "name": "BaseBdev1", 00:10:39.569 "aliases": [ 00:10:39.569 "e8fb5bd1-ef29-484d-8238-24c140e8e97b" 00:10:39.569 ], 00:10:39.569 "product_name": "Malloc disk", 00:10:39.569 "block_size": 512, 00:10:39.569 "num_blocks": 65536, 00:10:39.569 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:39.569 "assigned_rate_limits": { 00:10:39.569 "rw_ios_per_sec": 0, 00:10:39.569 "rw_mbytes_per_sec": 0, 
00:10:39.569 "r_mbytes_per_sec": 0, 00:10:39.569 "w_mbytes_per_sec": 0 00:10:39.569 }, 00:10:39.569 "claimed": true, 00:10:39.569 "claim_type": "exclusive_write", 00:10:39.569 "zoned": false, 00:10:39.569 "supported_io_types": { 00:10:39.569 "read": true, 00:10:39.569 "write": true, 00:10:39.569 "unmap": true, 00:10:39.569 "flush": true, 00:10:39.569 "reset": true, 00:10:39.569 "nvme_admin": false, 00:10:39.569 "nvme_io": false, 00:10:39.569 "nvme_io_md": false, 00:10:39.569 "write_zeroes": true, 00:10:39.569 "zcopy": true, 00:10:39.569 "get_zone_info": false, 00:10:39.570 "zone_management": false, 00:10:39.570 "zone_append": false, 00:10:39.570 "compare": false, 00:10:39.570 "compare_and_write": false, 00:10:39.570 "abort": true, 00:10:39.570 "seek_hole": false, 00:10:39.570 "seek_data": false, 00:10:39.570 "copy": true, 00:10:39.570 "nvme_iov_md": false 00:10:39.570 }, 00:10:39.570 "memory_domains": [ 00:10:39.570 { 00:10:39.570 "dma_device_id": "system", 00:10:39.570 "dma_device_type": 1 00:10:39.570 }, 00:10:39.570 { 00:10:39.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.570 "dma_device_type": 2 00:10:39.570 } 00:10:39.570 ], 00:10:39.570 "driver_specific": {} 00:10:39.570 } 00:10:39.570 ] 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.570 02:43:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.570 "name": "Existed_Raid", 00:10:39.570 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:39.570 "strip_size_kb": 64, 00:10:39.570 "state": "configuring", 00:10:39.570 "raid_level": "concat", 00:10:39.570 "superblock": true, 00:10:39.570 "num_base_bdevs": 4, 00:10:39.570 "num_base_bdevs_discovered": 3, 00:10:39.570 "num_base_bdevs_operational": 4, 00:10:39.570 "base_bdevs_list": [ 00:10:39.570 { 00:10:39.570 "name": "BaseBdev1", 00:10:39.570 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:39.570 "is_configured": true, 00:10:39.570 "data_offset": 2048, 00:10:39.570 "data_size": 63488 00:10:39.570 }, 00:10:39.570 { 
00:10:39.570 "name": null, 00:10:39.570 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:39.570 "is_configured": false, 00:10:39.570 "data_offset": 0, 00:10:39.570 "data_size": 63488 00:10:39.570 }, 00:10:39.570 { 00:10:39.570 "name": "BaseBdev3", 00:10:39.570 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:39.570 "is_configured": true, 00:10:39.570 "data_offset": 2048, 00:10:39.570 "data_size": 63488 00:10:39.570 }, 00:10:39.570 { 00:10:39.570 "name": "BaseBdev4", 00:10:39.570 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:39.570 "is_configured": true, 00:10:39.570 "data_offset": 2048, 00:10:39.570 "data_size": 63488 00:10:39.570 } 00:10:39.570 ] 00:10:39.570 }' 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.570 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.830 [2024-12-07 02:43:50.894523] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.830 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.090 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.090 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.090 02:43:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.090 "name": "Existed_Raid", 00:10:40.090 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:40.090 "strip_size_kb": 64, 00:10:40.090 "state": "configuring", 00:10:40.090 "raid_level": "concat", 00:10:40.090 "superblock": true, 00:10:40.090 "num_base_bdevs": 4, 00:10:40.090 "num_base_bdevs_discovered": 2, 00:10:40.090 "num_base_bdevs_operational": 4, 00:10:40.090 "base_bdevs_list": [ 00:10:40.090 { 00:10:40.090 "name": "BaseBdev1", 00:10:40.090 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:40.090 "is_configured": true, 00:10:40.090 "data_offset": 2048, 00:10:40.090 "data_size": 63488 00:10:40.090 }, 00:10:40.090 { 00:10:40.090 "name": null, 00:10:40.090 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:40.090 "is_configured": false, 00:10:40.090 "data_offset": 0, 00:10:40.090 "data_size": 63488 00:10:40.090 }, 00:10:40.090 { 00:10:40.090 "name": null, 00:10:40.090 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:40.090 "is_configured": false, 00:10:40.090 "data_offset": 0, 00:10:40.090 "data_size": 63488 00:10:40.090 }, 00:10:40.090 { 00:10:40.090 "name": "BaseBdev4", 00:10:40.090 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:40.091 "is_configured": true, 00:10:40.091 "data_offset": 2048, 00:10:40.091 "data_size": 63488 00:10:40.091 } 00:10:40.091 ] 00:10:40.091 }' 00:10:40.091 02:43:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.091 02:43:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.351 
02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.351 [2024-12-07 02:43:51.341825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.351 "name": "Existed_Raid", 00:10:40.351 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:40.351 "strip_size_kb": 64, 00:10:40.351 "state": "configuring", 00:10:40.351 "raid_level": "concat", 00:10:40.351 "superblock": true, 00:10:40.351 "num_base_bdevs": 4, 00:10:40.351 "num_base_bdevs_discovered": 3, 00:10:40.351 "num_base_bdevs_operational": 4, 00:10:40.351 "base_bdevs_list": [ 00:10:40.351 { 00:10:40.351 "name": "BaseBdev1", 00:10:40.351 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:40.351 "is_configured": true, 00:10:40.351 "data_offset": 2048, 00:10:40.351 "data_size": 63488 00:10:40.351 }, 00:10:40.351 { 00:10:40.351 "name": null, 00:10:40.351 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:40.351 "is_configured": false, 00:10:40.351 "data_offset": 0, 00:10:40.351 "data_size": 63488 00:10:40.351 }, 00:10:40.351 { 00:10:40.351 "name": "BaseBdev3", 00:10:40.351 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:40.351 "is_configured": true, 00:10:40.351 "data_offset": 2048, 00:10:40.351 "data_size": 63488 00:10:40.351 }, 00:10:40.351 { 00:10:40.351 "name": "BaseBdev4", 00:10:40.351 "uuid": 
"4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:40.351 "is_configured": true, 00:10:40.351 "data_offset": 2048, 00:10:40.351 "data_size": 63488 00:10:40.351 } 00:10:40.351 ] 00:10:40.351 }' 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.351 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.921 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.921 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.921 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.921 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.922 [2024-12-07 02:43:51.828995] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.922 "name": "Existed_Raid", 00:10:40.922 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:40.922 "strip_size_kb": 64, 00:10:40.922 "state": "configuring", 00:10:40.922 "raid_level": "concat", 00:10:40.922 "superblock": true, 00:10:40.922 "num_base_bdevs": 4, 00:10:40.922 "num_base_bdevs_discovered": 2, 00:10:40.922 "num_base_bdevs_operational": 4, 00:10:40.922 "base_bdevs_list": [ 00:10:40.922 { 00:10:40.922 "name": null, 00:10:40.922 
"uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:40.922 "is_configured": false, 00:10:40.922 "data_offset": 0, 00:10:40.922 "data_size": 63488 00:10:40.922 }, 00:10:40.922 { 00:10:40.922 "name": null, 00:10:40.922 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:40.922 "is_configured": false, 00:10:40.922 "data_offset": 0, 00:10:40.922 "data_size": 63488 00:10:40.922 }, 00:10:40.922 { 00:10:40.922 "name": "BaseBdev3", 00:10:40.922 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:40.922 "is_configured": true, 00:10:40.922 "data_offset": 2048, 00:10:40.922 "data_size": 63488 00:10:40.922 }, 00:10:40.922 { 00:10:40.922 "name": "BaseBdev4", 00:10:40.922 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:40.922 "is_configured": true, 00:10:40.922 "data_offset": 2048, 00:10:40.922 "data_size": 63488 00:10:40.922 } 00:10:40.922 ] 00:10:40.922 }' 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.922 02:43:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.491 [2024-12-07 02:43:52.335858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.491 02:43:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.491 "name": "Existed_Raid", 00:10:41.491 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:41.491 "strip_size_kb": 64, 00:10:41.491 "state": "configuring", 00:10:41.491 "raid_level": "concat", 00:10:41.491 "superblock": true, 00:10:41.491 "num_base_bdevs": 4, 00:10:41.491 "num_base_bdevs_discovered": 3, 00:10:41.491 "num_base_bdevs_operational": 4, 00:10:41.491 "base_bdevs_list": [ 00:10:41.491 { 00:10:41.491 "name": null, 00:10:41.491 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:41.491 "is_configured": false, 00:10:41.491 "data_offset": 0, 00:10:41.491 "data_size": 63488 00:10:41.491 }, 00:10:41.491 { 00:10:41.491 "name": "BaseBdev2", 00:10:41.491 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:41.491 "is_configured": true, 00:10:41.491 "data_offset": 2048, 00:10:41.491 "data_size": 63488 00:10:41.491 }, 00:10:41.491 { 00:10:41.491 "name": "BaseBdev3", 00:10:41.491 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:41.491 "is_configured": true, 00:10:41.491 "data_offset": 2048, 00:10:41.491 "data_size": 63488 00:10:41.491 }, 00:10:41.491 { 00:10:41.491 "name": "BaseBdev4", 00:10:41.491 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:41.491 "is_configured": true, 00:10:41.491 "data_offset": 2048, 00:10:41.491 "data_size": 63488 00:10:41.491 } 00:10:41.491 ] 00:10:41.491 }' 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.491 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.750 02:43:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e8fb5bd1-ef29-484d-8238-24c140e8e97b 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.750 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.093 [2024-12-07 02:43:52.831833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:42.093 [2024-12-07 02:43:52.832037] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:10:42.093 [2024-12-07 02:43:52.832050] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:42.093 [2024-12-07 02:43:52.832322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:10:42.093 [2024-12-07 02:43:52.832442] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:10:42.093 [2024-12-07 02:43:52.832455] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:10:42.093 NewBaseBdev 00:10:42.093 [2024-12-07 02:43:52.832553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.093 02:43:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.093 [ 00:10:42.093 { 00:10:42.093 "name": "NewBaseBdev", 00:10:42.093 "aliases": [ 00:10:42.093 "e8fb5bd1-ef29-484d-8238-24c140e8e97b" 00:10:42.093 ], 00:10:42.093 "product_name": "Malloc disk", 00:10:42.093 "block_size": 512, 00:10:42.093 "num_blocks": 65536, 00:10:42.093 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:42.093 "assigned_rate_limits": { 00:10:42.093 "rw_ios_per_sec": 0, 00:10:42.093 "rw_mbytes_per_sec": 0, 00:10:42.093 "r_mbytes_per_sec": 0, 00:10:42.093 "w_mbytes_per_sec": 0 00:10:42.093 }, 00:10:42.093 "claimed": true, 00:10:42.093 "claim_type": "exclusive_write", 00:10:42.093 "zoned": false, 00:10:42.093 "supported_io_types": { 00:10:42.093 "read": true, 00:10:42.093 "write": true, 00:10:42.093 "unmap": true, 00:10:42.093 "flush": true, 00:10:42.093 "reset": true, 00:10:42.093 "nvme_admin": false, 00:10:42.093 "nvme_io": false, 00:10:42.093 "nvme_io_md": false, 00:10:42.093 "write_zeroes": true, 00:10:42.093 "zcopy": true, 00:10:42.093 "get_zone_info": false, 00:10:42.093 "zone_management": false, 00:10:42.093 "zone_append": false, 00:10:42.093 "compare": false, 00:10:42.093 "compare_and_write": false, 00:10:42.093 "abort": true, 00:10:42.093 "seek_hole": false, 00:10:42.093 "seek_data": false, 00:10:42.093 "copy": true, 00:10:42.093 "nvme_iov_md": false 00:10:42.093 }, 00:10:42.093 "memory_domains": [ 00:10:42.093 { 00:10:42.093 "dma_device_id": "system", 00:10:42.093 "dma_device_type": 1 00:10:42.093 }, 00:10:42.093 { 00:10:42.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.093 "dma_device_type": 2 00:10:42.093 } 00:10:42.093 ], 00:10:42.093 "driver_specific": {} 00:10:42.093 } 00:10:42.093 ] 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:42.093 02:43:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.093 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.094 "name": "Existed_Raid", 00:10:42.094 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:42.094 "strip_size_kb": 64, 00:10:42.094 
"state": "online", 00:10:42.094 "raid_level": "concat", 00:10:42.094 "superblock": true, 00:10:42.094 "num_base_bdevs": 4, 00:10:42.094 "num_base_bdevs_discovered": 4, 00:10:42.094 "num_base_bdevs_operational": 4, 00:10:42.094 "base_bdevs_list": [ 00:10:42.094 { 00:10:42.094 "name": "NewBaseBdev", 00:10:42.094 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:42.094 "is_configured": true, 00:10:42.094 "data_offset": 2048, 00:10:42.094 "data_size": 63488 00:10:42.094 }, 00:10:42.094 { 00:10:42.094 "name": "BaseBdev2", 00:10:42.094 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:42.094 "is_configured": true, 00:10:42.094 "data_offset": 2048, 00:10:42.094 "data_size": 63488 00:10:42.094 }, 00:10:42.094 { 00:10:42.094 "name": "BaseBdev3", 00:10:42.094 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:42.094 "is_configured": true, 00:10:42.094 "data_offset": 2048, 00:10:42.094 "data_size": 63488 00:10:42.094 }, 00:10:42.094 { 00:10:42.094 "name": "BaseBdev4", 00:10:42.094 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:42.094 "is_configured": true, 00:10:42.094 "data_offset": 2048, 00:10:42.094 "data_size": 63488 00:10:42.094 } 00:10:42.094 ] 00:10:42.094 }' 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.094 02:43:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.382 
02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.382 [2024-12-07 02:43:53.331553] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.382 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.382 "name": "Existed_Raid", 00:10:42.382 "aliases": [ 00:10:42.382 "d6962770-dad0-4bae-8ad2-77bf98bf083b" 00:10:42.382 ], 00:10:42.382 "product_name": "Raid Volume", 00:10:42.382 "block_size": 512, 00:10:42.382 "num_blocks": 253952, 00:10:42.382 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:42.382 "assigned_rate_limits": { 00:10:42.382 "rw_ios_per_sec": 0, 00:10:42.382 "rw_mbytes_per_sec": 0, 00:10:42.382 "r_mbytes_per_sec": 0, 00:10:42.382 "w_mbytes_per_sec": 0 00:10:42.382 }, 00:10:42.382 "claimed": false, 00:10:42.382 "zoned": false, 00:10:42.382 "supported_io_types": { 00:10:42.382 "read": true, 00:10:42.382 "write": true, 00:10:42.382 "unmap": true, 00:10:42.382 "flush": true, 00:10:42.382 "reset": true, 00:10:42.382 "nvme_admin": false, 00:10:42.382 "nvme_io": false, 00:10:42.382 "nvme_io_md": false, 00:10:42.382 "write_zeroes": true, 00:10:42.382 "zcopy": false, 00:10:42.382 "get_zone_info": false, 00:10:42.382 "zone_management": false, 00:10:42.382 "zone_append": false, 00:10:42.382 "compare": false, 00:10:42.382 "compare_and_write": false, 00:10:42.382 "abort": 
false, 00:10:42.382 "seek_hole": false, 00:10:42.382 "seek_data": false, 00:10:42.382 "copy": false, 00:10:42.382 "nvme_iov_md": false 00:10:42.382 }, 00:10:42.382 "memory_domains": [ 00:10:42.382 { 00:10:42.382 "dma_device_id": "system", 00:10:42.382 "dma_device_type": 1 00:10:42.382 }, 00:10:42.382 { 00:10:42.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.382 "dma_device_type": 2 00:10:42.382 }, 00:10:42.382 { 00:10:42.382 "dma_device_id": "system", 00:10:42.382 "dma_device_type": 1 00:10:42.382 }, 00:10:42.382 { 00:10:42.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.382 "dma_device_type": 2 00:10:42.382 }, 00:10:42.382 { 00:10:42.382 "dma_device_id": "system", 00:10:42.382 "dma_device_type": 1 00:10:42.382 }, 00:10:42.382 { 00:10:42.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.382 "dma_device_type": 2 00:10:42.382 }, 00:10:42.382 { 00:10:42.382 "dma_device_id": "system", 00:10:42.382 "dma_device_type": 1 00:10:42.382 }, 00:10:42.382 { 00:10:42.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.382 "dma_device_type": 2 00:10:42.382 } 00:10:42.382 ], 00:10:42.383 "driver_specific": { 00:10:42.383 "raid": { 00:10:42.383 "uuid": "d6962770-dad0-4bae-8ad2-77bf98bf083b", 00:10:42.383 "strip_size_kb": 64, 00:10:42.383 "state": "online", 00:10:42.383 "raid_level": "concat", 00:10:42.383 "superblock": true, 00:10:42.383 "num_base_bdevs": 4, 00:10:42.383 "num_base_bdevs_discovered": 4, 00:10:42.383 "num_base_bdevs_operational": 4, 00:10:42.383 "base_bdevs_list": [ 00:10:42.383 { 00:10:42.383 "name": "NewBaseBdev", 00:10:42.383 "uuid": "e8fb5bd1-ef29-484d-8238-24c140e8e97b", 00:10:42.383 "is_configured": true, 00:10:42.383 "data_offset": 2048, 00:10:42.383 "data_size": 63488 00:10:42.383 }, 00:10:42.383 { 00:10:42.383 "name": "BaseBdev2", 00:10:42.383 "uuid": "df8f9126-be85-4403-8623-c3988a6d22b8", 00:10:42.383 "is_configured": true, 00:10:42.383 "data_offset": 2048, 00:10:42.383 "data_size": 63488 00:10:42.383 }, 00:10:42.383 { 00:10:42.383 
"name": "BaseBdev3", 00:10:42.383 "uuid": "2183fc6a-86a0-4264-a406-ef0d84fc24a4", 00:10:42.383 "is_configured": true, 00:10:42.383 "data_offset": 2048, 00:10:42.383 "data_size": 63488 00:10:42.383 }, 00:10:42.383 { 00:10:42.383 "name": "BaseBdev4", 00:10:42.383 "uuid": "4ed62cb0-85fb-4e49-8206-39f080b338d7", 00:10:42.383 "is_configured": true, 00:10:42.383 "data_offset": 2048, 00:10:42.383 "data_size": 63488 00:10:42.383 } 00:10:42.383 ] 00:10:42.383 } 00:10:42.383 } 00:10:42.383 }' 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:42.383 BaseBdev2 00:10:42.383 BaseBdev3 00:10:42.383 BaseBdev4' 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.383 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.643 02:43:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.643 [2024-12-07 02:43:53.622683] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.643 [2024-12-07 02:43:53.622755] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.643 [2024-12-07 02:43:53.622848] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.643 [2024-12-07 02:43:53.622935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.643 [2024-12-07 02:43:53.622980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, 
state offline 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83056 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83056 ']' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83056 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83056 00:10:42.643 killing process with pid 83056 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83056' 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83056 00:10:42.643 [2024-12-07 02:43:53.671101] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.643 02:43:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83056 00:10:42.903 [2024-12-07 02:43:53.748675] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.163 02:43:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:43.163 00:10:43.163 real 0m9.596s 00:10:43.163 user 0m16.001s 00:10:43.163 sys 0m2.198s 00:10:43.163 02:43:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.164 02:43:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.164 ************************************ 00:10:43.164 END TEST raid_state_function_test_sb 00:10:43.164 ************************************ 00:10:43.164 02:43:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:43.164 02:43:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:43.164 02:43:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.164 02:43:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.164 ************************************ 00:10:43.164 START TEST raid_superblock_test 00:10:43.164 ************************************ 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83704 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83704 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83704 ']' 00:10:43.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.164 02:43:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.424 [2024-12-07 02:43:54.287721] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:43.424 [2024-12-07 02:43:54.287945] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83704 ] 00:10:43.424 [2024-12-07 02:43:54.451069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.684 [2024-12-07 02:43:54.520929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.684 [2024-12-07 02:43:54.597950] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.684 [2024-12-07 02:43:54.597990] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:44.253 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:44.254 
02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 malloc1 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 [2024-12-07 02:43:55.136251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.254 [2024-12-07 02:43:55.136381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.254 [2024-12-07 02:43:55.136423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:44.254 [2024-12-07 02:43:55.136460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.254 [2024-12-07 02:43:55.138867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.254 [2024-12-07 02:43:55.138934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.254 pt1 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 malloc2 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 [2024-12-07 02:43:55.183135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.254 [2024-12-07 02:43:55.183223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.254 [2024-12-07 02:43:55.183244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:44.254 [2024-12-07 02:43:55.183255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.254 [2024-12-07 02:43:55.185691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.254 [2024-12-07 02:43:55.185761] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.254 
pt2 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 malloc3 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 [2024-12-07 02:43:55.217677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:44.254 [2024-12-07 02:43:55.217760] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.254 [2024-12-07 02:43:55.217794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:44.254 [2024-12-07 02:43:55.217824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.254 [2024-12-07 02:43:55.220138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.254 [2024-12-07 02:43:55.220205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:44.254 pt3 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 malloc4 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 [2024-12-07 02:43:55.256219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:44.254 [2024-12-07 02:43:55.256304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.254 [2024-12-07 02:43:55.256352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:44.254 [2024-12-07 02:43:55.256390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.254 [2024-12-07 02:43:55.258681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.254 [2024-12-07 02:43:55.258746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:44.254 pt4 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 [2024-12-07 02:43:55.268290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.254 [2024-12-07 
02:43:55.270346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.254 [2024-12-07 02:43:55.270443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:44.254 [2024-12-07 02:43:55.270523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:44.254 [2024-12-07 02:43:55.270727] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:10:44.254 [2024-12-07 02:43:55.270776] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:44.254 [2024-12-07 02:43:55.271046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:44.254 [2024-12-07 02:43:55.271233] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:10:44.254 [2024-12-07 02:43:55.271271] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:10:44.254 [2024-12-07 02:43:55.271432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.254 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.514 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.514 "name": "raid_bdev1", 00:10:44.514 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:44.514 "strip_size_kb": 64, 00:10:44.514 "state": "online", 00:10:44.514 "raid_level": "concat", 00:10:44.514 "superblock": true, 00:10:44.514 "num_base_bdevs": 4, 00:10:44.514 "num_base_bdevs_discovered": 4, 00:10:44.514 "num_base_bdevs_operational": 4, 00:10:44.514 "base_bdevs_list": [ 00:10:44.514 { 00:10:44.514 "name": "pt1", 00:10:44.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.514 "is_configured": true, 00:10:44.514 "data_offset": 2048, 00:10:44.514 "data_size": 63488 00:10:44.514 }, 00:10:44.514 { 00:10:44.514 "name": "pt2", 00:10:44.514 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.514 "is_configured": true, 00:10:44.514 "data_offset": 2048, 00:10:44.514 "data_size": 63488 00:10:44.514 }, 00:10:44.514 { 00:10:44.514 "name": "pt3", 00:10:44.514 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.514 "is_configured": true, 00:10:44.514 "data_offset": 2048, 00:10:44.514 
"data_size": 63488 00:10:44.514 }, 00:10:44.514 { 00:10:44.514 "name": "pt4", 00:10:44.514 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.514 "is_configured": true, 00:10:44.514 "data_offset": 2048, 00:10:44.514 "data_size": 63488 00:10:44.514 } 00:10:44.514 ] 00:10:44.514 }' 00:10:44.514 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.514 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.774 [2024-12-07 02:43:55.723863] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.774 "name": "raid_bdev1", 00:10:44.774 "aliases": [ 00:10:44.774 "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf" 
00:10:44.774 ], 00:10:44.774 "product_name": "Raid Volume", 00:10:44.774 "block_size": 512, 00:10:44.774 "num_blocks": 253952, 00:10:44.774 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:44.774 "assigned_rate_limits": { 00:10:44.774 "rw_ios_per_sec": 0, 00:10:44.774 "rw_mbytes_per_sec": 0, 00:10:44.774 "r_mbytes_per_sec": 0, 00:10:44.774 "w_mbytes_per_sec": 0 00:10:44.774 }, 00:10:44.774 "claimed": false, 00:10:44.774 "zoned": false, 00:10:44.774 "supported_io_types": { 00:10:44.774 "read": true, 00:10:44.774 "write": true, 00:10:44.774 "unmap": true, 00:10:44.774 "flush": true, 00:10:44.774 "reset": true, 00:10:44.774 "nvme_admin": false, 00:10:44.774 "nvme_io": false, 00:10:44.774 "nvme_io_md": false, 00:10:44.774 "write_zeroes": true, 00:10:44.774 "zcopy": false, 00:10:44.774 "get_zone_info": false, 00:10:44.774 "zone_management": false, 00:10:44.774 "zone_append": false, 00:10:44.774 "compare": false, 00:10:44.774 "compare_and_write": false, 00:10:44.774 "abort": false, 00:10:44.774 "seek_hole": false, 00:10:44.774 "seek_data": false, 00:10:44.774 "copy": false, 00:10:44.774 "nvme_iov_md": false 00:10:44.774 }, 00:10:44.774 "memory_domains": [ 00:10:44.774 { 00:10:44.774 "dma_device_id": "system", 00:10:44.774 "dma_device_type": 1 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.774 "dma_device_type": 2 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "system", 00:10:44.774 "dma_device_type": 1 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.774 "dma_device_type": 2 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "system", 00:10:44.774 "dma_device_type": 1 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.774 "dma_device_type": 2 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": "system", 00:10:44.774 "dma_device_type": 1 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:44.774 "dma_device_type": 2 00:10:44.774 } 00:10:44.774 ], 00:10:44.774 "driver_specific": { 00:10:44.774 "raid": { 00:10:44.774 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:44.774 "strip_size_kb": 64, 00:10:44.774 "state": "online", 00:10:44.774 "raid_level": "concat", 00:10:44.774 "superblock": true, 00:10:44.774 "num_base_bdevs": 4, 00:10:44.774 "num_base_bdevs_discovered": 4, 00:10:44.774 "num_base_bdevs_operational": 4, 00:10:44.774 "base_bdevs_list": [ 00:10:44.774 { 00:10:44.774 "name": "pt1", 00:10:44.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.774 "is_configured": true, 00:10:44.774 "data_offset": 2048, 00:10:44.774 "data_size": 63488 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "name": "pt2", 00:10:44.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.774 "is_configured": true, 00:10:44.774 "data_offset": 2048, 00:10:44.774 "data_size": 63488 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "name": "pt3", 00:10:44.774 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.774 "is_configured": true, 00:10:44.774 "data_offset": 2048, 00:10:44.774 "data_size": 63488 00:10:44.774 }, 00:10:44.774 { 00:10:44.774 "name": "pt4", 00:10:44.774 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:44.774 "is_configured": true, 00:10:44.774 "data_offset": 2048, 00:10:44.774 "data_size": 63488 00:10:44.774 } 00:10:44.774 ] 00:10:44.774 } 00:10:44.774 } 00:10:44.774 }' 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:44.774 pt2 00:10:44.774 pt3 00:10:44.774 pt4' 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.774 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.035 02:43:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.035 02:43:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 [2024-12-07 02:43:56.019227] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c25bd5f5-6006-43f0-a0ed-f8fda03b41bf 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c25bd5f5-6006-43f0-a0ed-f8fda03b41bf ']' 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 [2024-12-07 02:43:56.058880] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.035 [2024-12-07 02:43:56.058911] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.035 [2024-12-07 02:43:56.058987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.035 [2024-12-07 02:43:56.059077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.035 [2024-12-07 02:43:56.059089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.035 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.296 02:43:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 [2024-12-07 02:43:56.222675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:45.296 [2024-12-07 02:43:56.224857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:45.296 [2024-12-07 02:43:56.224954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:45.296 [2024-12-07 02:43:56.225000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:45.296 [2024-12-07 02:43:56.225081] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:45.296 [2024-12-07 02:43:56.225159] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:45.296 [2024-12-07 02:43:56.225245] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:45.296 [2024-12-07 02:43:56.225294] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:45.296 [2024-12-07 02:43:56.225338] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.296 [2024-12-07 02:43:56.225372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006600 name raid_bdev1, state configuring 00:10:45.296 request: 00:10:45.296 { 00:10:45.296 "name": "raid_bdev1", 00:10:45.296 "raid_level": "concat", 00:10:45.296 "base_bdevs": [ 00:10:45.296 "malloc1", 00:10:45.296 "malloc2", 00:10:45.296 "malloc3", 00:10:45.296 "malloc4" 00:10:45.296 ], 00:10:45.296 "strip_size_kb": 64, 00:10:45.296 "superblock": false, 00:10:45.296 "method": "bdev_raid_create", 00:10:45.296 "req_id": 1 00:10:45.296 } 00:10:45.296 Got JSON-RPC error response 00:10:45.296 response: 00:10:45.296 { 00:10:45.296 "code": -17, 00:10:45.296 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:45.296 } 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.296 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.296 [2024-12-07 02:43:56.290497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:45.296 [2024-12-07 02:43:56.290573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.296 [2024-12-07 02:43:56.290621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:45.296 [2024-12-07 02:43:56.290649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.296 [2024-12-07 02:43:56.293063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.296 [2024-12-07 02:43:56.293125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:45.296 [2024-12-07 02:43:56.293210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:45.296 [2024-12-07 02:43:56.293272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:45.296 pt1 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.297 "name": "raid_bdev1", 00:10:45.297 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:45.297 "strip_size_kb": 64, 00:10:45.297 "state": "configuring", 00:10:45.297 "raid_level": "concat", 00:10:45.297 "superblock": true, 00:10:45.297 "num_base_bdevs": 4, 00:10:45.297 "num_base_bdevs_discovered": 1, 00:10:45.297 "num_base_bdevs_operational": 4, 00:10:45.297 "base_bdevs_list": [ 00:10:45.297 { 00:10:45.297 "name": "pt1", 00:10:45.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.297 "is_configured": true, 00:10:45.297 "data_offset": 2048, 00:10:45.297 "data_size": 63488 00:10:45.297 }, 00:10:45.297 { 00:10:45.297 "name": null, 00:10:45.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.297 "is_configured": false, 00:10:45.297 "data_offset": 2048, 00:10:45.297 "data_size": 63488 00:10:45.297 }, 00:10:45.297 { 00:10:45.297 "name": null, 00:10:45.297 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.297 "is_configured": false, 00:10:45.297 "data_offset": 2048, 00:10:45.297 "data_size": 63488 00:10:45.297 }, 00:10:45.297 { 00:10:45.297 "name": null, 00:10:45.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.297 "is_configured": false, 00:10:45.297 "data_offset": 2048, 00:10:45.297 "data_size": 63488 00:10:45.297 } 00:10:45.297 ] 00:10:45.297 }' 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.297 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.867 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:45.867 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.867 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.867 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.867 [2024-12-07 02:43:56.733789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.867 [2024-12-07 02:43:56.733881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.867 [2024-12-07 02:43:56.733906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:45.867 [2024-12-07 02:43:56.733916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.867 [2024-12-07 02:43:56.734394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.867 [2024-12-07 02:43:56.734411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.867 [2024-12-07 02:43:56.734509] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.868 [2024-12-07 02:43:56.734532] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.868 pt2 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.868 [2024-12-07 02:43:56.745762] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.868 02:43:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.868 "name": "raid_bdev1", 00:10:45.868 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:45.868 "strip_size_kb": 64, 00:10:45.868 "state": "configuring", 00:10:45.868 "raid_level": "concat", 00:10:45.868 "superblock": true, 00:10:45.868 "num_base_bdevs": 4, 00:10:45.868 "num_base_bdevs_discovered": 1, 00:10:45.868 "num_base_bdevs_operational": 4, 00:10:45.868 "base_bdevs_list": [ 00:10:45.868 { 00:10:45.868 "name": "pt1", 00:10:45.868 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.868 "is_configured": true, 00:10:45.868 "data_offset": 2048, 00:10:45.868 "data_size": 63488 00:10:45.868 }, 00:10:45.868 { 00:10:45.868 "name": null, 00:10:45.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.868 "is_configured": false, 00:10:45.868 "data_offset": 0, 00:10:45.868 "data_size": 63488 00:10:45.868 }, 00:10:45.868 { 00:10:45.868 "name": null, 00:10:45.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.868 "is_configured": false, 00:10:45.868 "data_offset": 2048, 00:10:45.868 "data_size": 63488 00:10:45.868 }, 00:10:45.868 { 00:10:45.868 "name": null, 00:10:45.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:45.868 "is_configured": false, 00:10:45.868 "data_offset": 2048, 00:10:45.868 "data_size": 63488 00:10:45.868 } 00:10:45.868 ] 00:10:45.868 }' 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.868 02:43:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.128 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:46.128 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.128 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.128 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.128 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.128 [2024-12-07 02:43:57.165027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.128 [2024-12-07 02:43:57.165155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.128 [2024-12-07 02:43:57.165192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:46.128 [2024-12-07 02:43:57.165261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.128 [2024-12-07 02:43:57.165762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.129 [2024-12-07 02:43:57.165821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.129 [2024-12-07 02:43:57.165939] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.129 [2024-12-07 02:43:57.165996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.129 pt2 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.129 [2024-12-07 02:43:57.176946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.129 [2024-12-07 02:43:57.177044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.129 [2024-12-07 02:43:57.177080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:46.129 [2024-12-07 02:43:57.177110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.129 [2024-12-07 02:43:57.177513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.129 [2024-12-07 02:43:57.177567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.129 [2024-12-07 02:43:57.177679] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:46.129 [2024-12-07 02:43:57.177733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.129 pt3 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.129 [2024-12-07 02:43:57.188930] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:46.129 [2024-12-07 02:43:57.189017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.129 [2024-12-07 02:43:57.189049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:46.129 [2024-12-07 02:43:57.189078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.129 [2024-12-07 02:43:57.189430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.129 [2024-12-07 02:43:57.189452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:46.129 [2024-12-07 02:43:57.189505] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:46.129 [2024-12-07 02:43:57.189525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:46.129 [2024-12-07 02:43:57.189641] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:46.129 [2024-12-07 02:43:57.189656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:46.129 [2024-12-07 02:43:57.189894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:46.129 [2024-12-07 02:43:57.190014] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:46.129 [2024-12-07 02:43:57.190023] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:10:46.129 [2024-12-07 02:43:57.190121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.129 pt4 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.129 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.389 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.389 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.389 "name": "raid_bdev1", 00:10:46.389 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:46.389 "strip_size_kb": 64, 00:10:46.389 "state": "online", 00:10:46.389 "raid_level": "concat", 00:10:46.389 
"superblock": true, 00:10:46.389 "num_base_bdevs": 4, 00:10:46.389 "num_base_bdevs_discovered": 4, 00:10:46.389 "num_base_bdevs_operational": 4, 00:10:46.389 "base_bdevs_list": [ 00:10:46.389 { 00:10:46.389 "name": "pt1", 00:10:46.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.389 "is_configured": true, 00:10:46.389 "data_offset": 2048, 00:10:46.389 "data_size": 63488 00:10:46.389 }, 00:10:46.389 { 00:10:46.389 "name": "pt2", 00:10:46.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.389 "is_configured": true, 00:10:46.389 "data_offset": 2048, 00:10:46.389 "data_size": 63488 00:10:46.389 }, 00:10:46.389 { 00:10:46.389 "name": "pt3", 00:10:46.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.389 "is_configured": true, 00:10:46.389 "data_offset": 2048, 00:10:46.389 "data_size": 63488 00:10:46.389 }, 00:10:46.389 { 00:10:46.389 "name": "pt4", 00:10:46.389 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.389 "is_configured": true, 00:10:46.389 "data_offset": 2048, 00:10:46.390 "data_size": 63488 00:10:46.390 } 00:10:46.390 ] 00:10:46.390 }' 00:10:46.390 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.390 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.650 02:43:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.650 [2024-12-07 02:43:57.628628] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.650 "name": "raid_bdev1", 00:10:46.650 "aliases": [ 00:10:46.650 "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf" 00:10:46.650 ], 00:10:46.650 "product_name": "Raid Volume", 00:10:46.650 "block_size": 512, 00:10:46.650 "num_blocks": 253952, 00:10:46.650 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:46.650 "assigned_rate_limits": { 00:10:46.650 "rw_ios_per_sec": 0, 00:10:46.650 "rw_mbytes_per_sec": 0, 00:10:46.650 "r_mbytes_per_sec": 0, 00:10:46.650 "w_mbytes_per_sec": 0 00:10:46.650 }, 00:10:46.650 "claimed": false, 00:10:46.650 "zoned": false, 00:10:46.650 "supported_io_types": { 00:10:46.650 "read": true, 00:10:46.650 "write": true, 00:10:46.650 "unmap": true, 00:10:46.650 "flush": true, 00:10:46.650 "reset": true, 00:10:46.650 "nvme_admin": false, 00:10:46.650 "nvme_io": false, 00:10:46.650 "nvme_io_md": false, 00:10:46.650 "write_zeroes": true, 00:10:46.650 "zcopy": false, 00:10:46.650 "get_zone_info": false, 00:10:46.650 "zone_management": false, 00:10:46.650 "zone_append": false, 00:10:46.650 "compare": false, 00:10:46.650 "compare_and_write": false, 00:10:46.650 "abort": false, 00:10:46.650 "seek_hole": false, 00:10:46.650 "seek_data": false, 00:10:46.650 "copy": false, 00:10:46.650 "nvme_iov_md": false 00:10:46.650 }, 00:10:46.650 
"memory_domains": [ 00:10:46.650 { 00:10:46.650 "dma_device_id": "system", 00:10:46.650 "dma_device_type": 1 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.650 "dma_device_type": 2 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "dma_device_id": "system", 00:10:46.650 "dma_device_type": 1 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.650 "dma_device_type": 2 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "dma_device_id": "system", 00:10:46.650 "dma_device_type": 1 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.650 "dma_device_type": 2 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "dma_device_id": "system", 00:10:46.650 "dma_device_type": 1 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.650 "dma_device_type": 2 00:10:46.650 } 00:10:46.650 ], 00:10:46.650 "driver_specific": { 00:10:46.650 "raid": { 00:10:46.650 "uuid": "c25bd5f5-6006-43f0-a0ed-f8fda03b41bf", 00:10:46.650 "strip_size_kb": 64, 00:10:46.650 "state": "online", 00:10:46.650 "raid_level": "concat", 00:10:46.650 "superblock": true, 00:10:46.650 "num_base_bdevs": 4, 00:10:46.650 "num_base_bdevs_discovered": 4, 00:10:46.650 "num_base_bdevs_operational": 4, 00:10:46.650 "base_bdevs_list": [ 00:10:46.650 { 00:10:46.650 "name": "pt1", 00:10:46.650 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:46.650 "is_configured": true, 00:10:46.650 "data_offset": 2048, 00:10:46.650 "data_size": 63488 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "name": "pt2", 00:10:46.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.650 "is_configured": true, 00:10:46.650 "data_offset": 2048, 00:10:46.650 "data_size": 63488 00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "name": "pt3", 00:10:46.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.650 "is_configured": true, 00:10:46.650 "data_offset": 2048, 00:10:46.650 "data_size": 63488 
00:10:46.650 }, 00:10:46.650 { 00:10:46.650 "name": "pt4", 00:10:46.650 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:46.650 "is_configured": true, 00:10:46.650 "data_offset": 2048, 00:10:46.650 "data_size": 63488 00:10:46.650 } 00:10:46.650 ] 00:10:46.650 } 00:10:46.650 } 00:10:46.650 }' 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:46.650 pt2 00:10:46.650 pt3 00:10:46.650 pt4' 00:10:46.650 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.910 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:46.911 [2024-12-07 02:43:57.928035] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c25bd5f5-6006-43f0-a0ed-f8fda03b41bf '!=' c25bd5f5-6006-43f0-a0ed-f8fda03b41bf ']' 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83704 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83704 ']' 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83704 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.911 02:43:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83704 00:10:47.171 02:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.171 02:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.171 02:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83704' 00:10:47.171 killing process with pid 83704 00:10:47.171 02:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83704 00:10:47.171 [2024-12-07 02:43:58.017237] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.171 [2024-12-07 02:43:58.017382] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.171 02:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83704 00:10:47.171 [2024-12-07 02:43:58.017483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.171 [2024-12-07 02:43:58.017498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:10:47.171 [2024-12-07 02:43:58.097425] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.431 02:43:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:47.431 00:10:47.431 real 0m4.272s 00:10:47.431 user 0m6.506s 00:10:47.431 sys 0m1.001s 00:10:47.431 02:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.431 02:43:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.431 ************************************ 00:10:47.431 END TEST raid_superblock_test 
00:10:47.431 ************************************ 00:10:47.691 02:43:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:47.691 02:43:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:47.691 02:43:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.691 02:43:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.691 ************************************ 00:10:47.691 START TEST raid_read_error_test 00:10:47.691 ************************************ 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rSiYFYuPcj 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83958 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83958 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83958 ']' 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.691 02:43:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.691 [2024-12-07 02:43:58.644403] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:47.691 [2024-12-07 02:43:58.644640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83958 ] 00:10:47.951 [2024-12-07 02:43:58.806283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.951 [2024-12-07 02:43:58.875515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.951 [2024-12-07 02:43:58.951922] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.951 [2024-12-07 02:43:58.952060] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.520 BaseBdev1_malloc 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.520 true 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.520 [2024-12-07 02:43:59.510074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:48.520 [2024-12-07 02:43:59.510131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.520 [2024-12-07 02:43:59.510174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:48.520 [2024-12-07 02:43:59.510183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.520 [2024-12-07 02:43:59.512661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.520 [2024-12-07 02:43:59.512699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.520 BaseBdev1 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.520 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.521 BaseBdev2_malloc 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.521 true 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.521 [2024-12-07 02:43:59.570952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:48.521 [2024-12-07 02:43:59.571004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.521 [2024-12-07 02:43:59.571023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:48.521 [2024-12-07 02:43:59.571032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.521 [2024-12-07 02:43:59.573394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.521 [2024-12-07 02:43:59.573475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.521 BaseBdev2 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.521 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.781 BaseBdev3_malloc 00:10:48.781 02:43:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.781 true 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.781 [2024-12-07 02:43:59.617698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:48.781 [2024-12-07 02:43:59.617748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.781 [2024-12-07 02:43:59.617767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:48.781 [2024-12-07 02:43:59.617776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.781 [2024-12-07 02:43:59.620129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.781 [2024-12-07 02:43:59.620212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:48.781 BaseBdev3 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.781 BaseBdev4_malloc 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.781 true 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.781 [2024-12-07 02:43:59.664357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:48.781 [2024-12-07 02:43:59.664407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.781 [2024-12-07 02:43:59.664431] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:48.781 [2024-12-07 02:43:59.664440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.781 [2024-12-07 02:43:59.666718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.781 [2024-12-07 02:43:59.666792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:48.781 BaseBdev4 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.781 [2024-12-07 02:43:59.676396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.781 [2024-12-07 02:43:59.678512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.781 [2024-12-07 02:43:59.678647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.781 [2024-12-07 02:43:59.678724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.781 [2024-12-07 02:43:59.678959] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:48.781 [2024-12-07 02:43:59.679004] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.781 [2024-12-07 02:43:59.679278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:48.781 [2024-12-07 02:43:59.679453] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:48.781 [2024-12-07 02:43:59.679507] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:48.781 [2024-12-07 02:43:59.679686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:48.781 02:43:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.781 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.782 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.782 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.782 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.782 "name": "raid_bdev1", 00:10:48.782 "uuid": "494f3fb3-9b87-4fea-957d-7017c1cd3947", 00:10:48.782 "strip_size_kb": 64, 00:10:48.782 "state": "online", 00:10:48.782 "raid_level": "concat", 00:10:48.782 "superblock": true, 00:10:48.782 "num_base_bdevs": 4, 00:10:48.782 "num_base_bdevs_discovered": 4, 00:10:48.782 "num_base_bdevs_operational": 4, 00:10:48.782 "base_bdevs_list": [ 
00:10:48.782 { 00:10:48.782 "name": "BaseBdev1", 00:10:48.782 "uuid": "fb854146-9175-5313-ae07-f5e5fbcd5554", 00:10:48.782 "is_configured": true, 00:10:48.782 "data_offset": 2048, 00:10:48.782 "data_size": 63488 00:10:48.782 }, 00:10:48.782 { 00:10:48.782 "name": "BaseBdev2", 00:10:48.782 "uuid": "18043bca-de7d-5313-8663-446be529439c", 00:10:48.782 "is_configured": true, 00:10:48.782 "data_offset": 2048, 00:10:48.782 "data_size": 63488 00:10:48.782 }, 00:10:48.782 { 00:10:48.782 "name": "BaseBdev3", 00:10:48.782 "uuid": "c1ec1b30-969b-52d5-a7b6-c4defd2804d3", 00:10:48.782 "is_configured": true, 00:10:48.782 "data_offset": 2048, 00:10:48.782 "data_size": 63488 00:10:48.782 }, 00:10:48.782 { 00:10:48.782 "name": "BaseBdev4", 00:10:48.782 "uuid": "e9bd57f4-0d9a-503e-af74-7f9999c73f9c", 00:10:48.782 "is_configured": true, 00:10:48.782 "data_offset": 2048, 00:10:48.782 "data_size": 63488 00:10:48.782 } 00:10:48.782 ] 00:10:48.782 }' 00:10:48.782 02:43:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.782 02:43:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.352 02:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:49.352 02:44:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.352 [2024-12-07 02:44:00.227889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.292 02:44:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.292 02:44:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.292 "name": "raid_bdev1", 00:10:50.292 "uuid": "494f3fb3-9b87-4fea-957d-7017c1cd3947", 00:10:50.292 "strip_size_kb": 64, 00:10:50.292 "state": "online", 00:10:50.292 "raid_level": "concat", 00:10:50.292 "superblock": true, 00:10:50.292 "num_base_bdevs": 4, 00:10:50.292 "num_base_bdevs_discovered": 4, 00:10:50.292 "num_base_bdevs_operational": 4, 00:10:50.292 "base_bdevs_list": [ 00:10:50.292 { 00:10:50.292 "name": "BaseBdev1", 00:10:50.292 "uuid": "fb854146-9175-5313-ae07-f5e5fbcd5554", 00:10:50.292 "is_configured": true, 00:10:50.292 "data_offset": 2048, 00:10:50.292 "data_size": 63488 00:10:50.292 }, 00:10:50.292 { 00:10:50.292 "name": "BaseBdev2", 00:10:50.292 "uuid": "18043bca-de7d-5313-8663-446be529439c", 00:10:50.292 "is_configured": true, 00:10:50.292 "data_offset": 2048, 00:10:50.292 "data_size": 63488 00:10:50.292 }, 00:10:50.292 { 00:10:50.292 "name": "BaseBdev3", 00:10:50.292 "uuid": "c1ec1b30-969b-52d5-a7b6-c4defd2804d3", 00:10:50.292 "is_configured": true, 00:10:50.292 "data_offset": 2048, 00:10:50.292 "data_size": 63488 00:10:50.292 }, 00:10:50.292 { 00:10:50.292 "name": "BaseBdev4", 00:10:50.292 "uuid": "e9bd57f4-0d9a-503e-af74-7f9999c73f9c", 00:10:50.292 "is_configured": true, 00:10:50.292 "data_offset": 2048, 00:10:50.292 "data_size": 63488 00:10:50.292 } 00:10:50.292 ] 00:10:50.292 }' 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.292 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.552 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.552 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.553 [2024-12-07 02:44:01.604650] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.553 [2024-12-07 02:44:01.604759] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.553 [2024-12-07 02:44:01.607271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.553 [2024-12-07 02:44:01.607331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.553 [2024-12-07 02:44:01.607382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.553 [2024-12-07 02:44:01.607392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:50.553 { 00:10:50.553 "results": [ 00:10:50.553 { 00:10:50.553 "job": "raid_bdev1", 00:10:50.553 "core_mask": "0x1", 00:10:50.553 "workload": "randrw", 00:10:50.553 "percentage": 50, 00:10:50.553 "status": "finished", 00:10:50.553 "queue_depth": 1, 00:10:50.553 "io_size": 131072, 00:10:50.553 "runtime": 1.37742, 00:10:50.553 "iops": 14641.140683306472, 00:10:50.553 "mibps": 1830.142585413309, 00:10:50.553 "io_failed": 1, 00:10:50.553 "io_timeout": 0, 00:10:50.553 "avg_latency_us": 96.0073665922409, 00:10:50.553 "min_latency_us": 26.047161572052403, 00:10:50.553 "max_latency_us": 1387.989519650655 00:10:50.553 } 00:10:50.553 ], 00:10:50.553 "core_count": 1 00:10:50.553 } 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83958 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83958 ']' 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83958 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.553 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83958 00:10:50.813 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.813 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.813 killing process with pid 83958 00:10:50.813 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83958' 00:10:50.813 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83958 00:10:50.813 [2024-12-07 02:44:01.637464] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.813 02:44:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83958 00:10:50.813 [2024-12-07 02:44:01.703260] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rSiYFYuPcj 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:51.073 ************************************ 00:10:51.073 END TEST raid_read_error_test 00:10:51.073 ************************************ 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:51.073 00:10:51.073 real 0m3.539s 
00:10:51.073 user 0m4.295s 00:10:51.073 sys 0m0.647s 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.073 02:44:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.073 02:44:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:51.073 02:44:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:51.073 02:44:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.073 02:44:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.334 ************************************ 00:10:51.334 START TEST raid_write_error_test 00:10:51.334 ************************************ 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nloWnOe5yJ 00:10:51.334 02:44:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=84087 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 84087 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 84087 ']' 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.334 02:44:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.334 [2024-12-07 02:44:02.262727] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:51.334 [2024-12-07 02:44:02.262946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84087 ] 00:10:51.595 [2024-12-07 02:44:02.427520] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.595 [2024-12-07 02:44:02.495725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.595 [2024-12-07 02:44:02.571396] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.595 [2024-12-07 02:44:02.571432] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 BaseBdev1_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 true 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 [2024-12-07 02:44:03.116798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:52.165 [2024-12-07 02:44:03.116901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.165 [2024-12-07 02:44:03.116948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:52.165 [2024-12-07 02:44:03.116959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.165 [2024-12-07 02:44:03.119364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.165 [2024-12-07 02:44:03.119405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:52.165 BaseBdev1 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 BaseBdev2_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:52.165 02:44:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 true 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 [2024-12-07 02:44:03.180516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:52.165 [2024-12-07 02:44:03.180664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.165 [2024-12-07 02:44:03.180702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:52.165 [2024-12-07 02:44:03.180717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.165 [2024-12-07 02:44:03.183393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.165 [2024-12-07 02:44:03.183432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:52.165 BaseBdev2 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:52.165 BaseBdev3_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 true 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.165 [2024-12-07 02:44:03.226701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:52.165 [2024-12-07 02:44:03.226784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.165 [2024-12-07 02:44:03.226808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:52.165 [2024-12-07 02:44:03.226817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.165 [2024-12-07 02:44:03.229095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.165 [2024-12-07 02:44:03.229130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:52.165 BaseBdev3 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.165 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.425 BaseBdev4_malloc 00:10:52.425 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.426 true 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.426 [2024-12-07 02:44:03.273182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:52.426 [2024-12-07 02:44:03.273264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.426 [2024-12-07 02:44:03.273308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:52.426 [2024-12-07 02:44:03.273317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.426 [2024-12-07 02:44:03.275655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.426 [2024-12-07 02:44:03.275720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:52.426 BaseBdev4 
00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.426 [2024-12-07 02:44:03.285235] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.426 [2024-12-07 02:44:03.287243] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.426 [2024-12-07 02:44:03.287324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.426 [2024-12-07 02:44:03.287374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.426 [2024-12-07 02:44:03.287608] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:10:52.426 [2024-12-07 02:44:03.287641] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.426 [2024-12-07 02:44:03.287894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:52.426 [2024-12-07 02:44:03.288048] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:10:52.426 [2024-12-07 02:44:03.288067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:10:52.426 [2024-12-07 02:44:03.288192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.426 "name": "raid_bdev1", 00:10:52.426 "uuid": "e58b2aa0-e2a9-4b1e-9dd6-e056f88ee8a7", 00:10:52.426 "strip_size_kb": 64, 00:10:52.426 "state": "online", 00:10:52.426 "raid_level": "concat", 00:10:52.426 "superblock": true, 00:10:52.426 "num_base_bdevs": 4, 00:10:52.426 "num_base_bdevs_discovered": 4, 00:10:52.426 
"num_base_bdevs_operational": 4, 00:10:52.426 "base_bdevs_list": [ 00:10:52.426 { 00:10:52.426 "name": "BaseBdev1", 00:10:52.426 "uuid": "0eca3000-d7c5-5181-a1d6-c402bab42731", 00:10:52.426 "is_configured": true, 00:10:52.426 "data_offset": 2048, 00:10:52.426 "data_size": 63488 00:10:52.426 }, 00:10:52.426 { 00:10:52.426 "name": "BaseBdev2", 00:10:52.426 "uuid": "9868247c-3171-5cb4-a736-8d8a74441a6c", 00:10:52.426 "is_configured": true, 00:10:52.426 "data_offset": 2048, 00:10:52.426 "data_size": 63488 00:10:52.426 }, 00:10:52.426 { 00:10:52.426 "name": "BaseBdev3", 00:10:52.426 "uuid": "6fa9cf82-09c7-591c-b2cd-038399e0528a", 00:10:52.426 "is_configured": true, 00:10:52.426 "data_offset": 2048, 00:10:52.426 "data_size": 63488 00:10:52.426 }, 00:10:52.426 { 00:10:52.426 "name": "BaseBdev4", 00:10:52.426 "uuid": "d79a3aab-b41f-5a15-a122-525510442b40", 00:10:52.426 "is_configured": true, 00:10:52.426 "data_offset": 2048, 00:10:52.426 "data_size": 63488 00:10:52.426 } 00:10:52.426 ] 00:10:52.426 }' 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.426 02:44:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.686 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:52.686 02:44:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:52.946 [2024-12-07 02:44:03.812725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.886 02:44:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.886 "name": "raid_bdev1", 00:10:53.886 "uuid": "e58b2aa0-e2a9-4b1e-9dd6-e056f88ee8a7", 00:10:53.886 "strip_size_kb": 64, 00:10:53.886 "state": "online", 00:10:53.886 "raid_level": "concat", 00:10:53.886 "superblock": true, 00:10:53.886 "num_base_bdevs": 4, 00:10:53.886 "num_base_bdevs_discovered": 4, 00:10:53.886 "num_base_bdevs_operational": 4, 00:10:53.886 "base_bdevs_list": [ 00:10:53.886 { 00:10:53.886 "name": "BaseBdev1", 00:10:53.886 "uuid": "0eca3000-d7c5-5181-a1d6-c402bab42731", 00:10:53.886 "is_configured": true, 00:10:53.886 "data_offset": 2048, 00:10:53.886 "data_size": 63488 00:10:53.886 }, 00:10:53.886 { 00:10:53.886 "name": "BaseBdev2", 00:10:53.886 "uuid": "9868247c-3171-5cb4-a736-8d8a74441a6c", 00:10:53.886 "is_configured": true, 00:10:53.886 "data_offset": 2048, 00:10:53.886 "data_size": 63488 00:10:53.886 }, 00:10:53.886 { 00:10:53.886 "name": "BaseBdev3", 00:10:53.886 "uuid": "6fa9cf82-09c7-591c-b2cd-038399e0528a", 00:10:53.886 "is_configured": true, 00:10:53.886 "data_offset": 2048, 00:10:53.886 "data_size": 63488 00:10:53.886 }, 00:10:53.886 { 00:10:53.886 "name": "BaseBdev4", 00:10:53.886 "uuid": "d79a3aab-b41f-5a15-a122-525510442b40", 00:10:53.886 "is_configured": true, 00:10:53.886 "data_offset": 2048, 00:10:53.886 "data_size": 63488 00:10:53.886 } 00:10:53.886 ] 00:10:53.886 }' 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.886 02:44:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.152 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.152 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.152 02:44:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.152 [2024-12-07 02:44:05.193914] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.152 [2024-12-07 02:44:05.194010] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.152 [2024-12-07 02:44:05.196468] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.152 [2024-12-07 02:44:05.196566] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.152 [2024-12-07 02:44:05.196671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.152 [2024-12-07 02:44:05.196719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:10:54.152 { 00:10:54.152 "results": [ 00:10:54.152 { 00:10:54.152 "job": "raid_bdev1", 00:10:54.152 "core_mask": "0x1", 00:10:54.152 "workload": "randrw", 00:10:54.152 "percentage": 50, 00:10:54.152 "status": "finished", 00:10:54.152 "queue_depth": 1, 00:10:54.152 "io_size": 131072, 00:10:54.152 "runtime": 1.381948, 00:10:54.152 "iops": 14651.057782203094, 00:10:54.152 "mibps": 1831.3822227753867, 00:10:54.152 "io_failed": 1, 00:10:54.152 "io_timeout": 0, 00:10:54.152 "avg_latency_us": 96.0086294144745, 00:10:54.152 "min_latency_us": 25.152838427947597, 00:10:54.152 "max_latency_us": 1373.6803493449781 00:10:54.152 } 00:10:54.152 ], 00:10:54.152 "core_count": 1 00:10:54.152 } 00:10:54.152 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.153 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 84087 00:10:54.153 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 84087 ']' 00:10:54.153 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 84087 00:10:54.153 02:44:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:54.153 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.153 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84087 00:10:54.414 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:54.414 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:54.414 killing process with pid 84087 00:10:54.414 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84087' 00:10:54.414 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 84087 00:10:54.414 [2024-12-07 02:44:05.243510] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.414 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 84087 00:10:54.414 [2024-12-07 02:44:05.310622] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nloWnOe5yJ 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:54.673 00:10:54.673 real 0m3.538s 00:10:54.673 user 0m4.250s 
00:10:54.673 sys 0m0.674s 00:10:54.673 ************************************ 00:10:54.673 END TEST raid_write_error_test 00:10:54.673 ************************************ 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.673 02:44:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.673 02:44:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:54.673 02:44:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:54.673 02:44:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:54.673 02:44:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.673 02:44:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 ************************************ 00:10:54.933 START TEST raid_state_function_test 00:10:54.933 ************************************ 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.933 
02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:54.933 02:44:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:54.933 Process raid pid: 84225 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84225 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84225' 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84225 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 84225 ']' 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.933 02:44:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 [2024-12-07 02:44:05.862907] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:10:54.933 [2024-12-07 02:44:05.863112] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.193 [2024-12-07 02:44:06.029245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.193 [2024-12-07 02:44:06.099040] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.193 [2024-12-07 02:44:06.176334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.193 [2024-12-07 02:44:06.176375] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.761 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.761 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:55.761 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:55.761 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.761 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.761 [2024-12-07 02:44:06.679770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:55.761 [2024-12-07 02:44:06.679823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:55.761 [2024-12-07 02:44:06.679836] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:55.761 [2024-12-07 02:44:06.679847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:55.762 [2024-12-07 02:44:06.679856] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:55.762 [2024-12-07 02:44:06.679869] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:55.762 [2024-12-07 02:44:06.679875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:55.762 [2024-12-07 02:44:06.679885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.762 "name": "Existed_Raid", 00:10:55.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.762 "strip_size_kb": 0, 00:10:55.762 "state": "configuring", 00:10:55.762 "raid_level": "raid1", 00:10:55.762 "superblock": false, 00:10:55.762 "num_base_bdevs": 4, 00:10:55.762 "num_base_bdevs_discovered": 0, 00:10:55.762 "num_base_bdevs_operational": 4, 00:10:55.762 "base_bdevs_list": [ 00:10:55.762 { 00:10:55.762 "name": "BaseBdev1", 00:10:55.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.762 "is_configured": false, 00:10:55.762 "data_offset": 0, 00:10:55.762 "data_size": 0 00:10:55.762 }, 00:10:55.762 { 00:10:55.762 "name": "BaseBdev2", 00:10:55.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.762 "is_configured": false, 00:10:55.762 "data_offset": 0, 00:10:55.762 "data_size": 0 00:10:55.762 }, 00:10:55.762 { 00:10:55.762 "name": "BaseBdev3", 00:10:55.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.762 "is_configured": false, 00:10:55.762 "data_offset": 0, 00:10:55.762 "data_size": 0 00:10:55.762 }, 00:10:55.762 { 00:10:55.762 "name": "BaseBdev4", 00:10:55.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.762 "is_configured": false, 00:10:55.762 "data_offset": 0, 00:10:55.762 "data_size": 0 00:10:55.762 } 00:10:55.762 ] 00:10:55.762 }' 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.762 02:44:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.021 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:56.021 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.021 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.021 [2024-12-07 02:44:07.086973] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.021 [2024-12-07 02:44:07.087066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:10:56.021 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.021 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.021 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.021 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.281 [2024-12-07 02:44:07.098998] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.281 [2024-12-07 02:44:07.099075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.281 [2024-12-07 02:44:07.099102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.281 [2024-12-07 02:44:07.099126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.281 [2024-12-07 02:44:07.099144] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:56.281 [2024-12-07 02:44:07.099166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.281 [2024-12-07 02:44:07.099184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.281 [2024-12-07 02:44:07.099206] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.281 [2024-12-07 02:44:07.126148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.281 BaseBdev1 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.281 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.282 [ 00:10:56.282 { 00:10:56.282 "name": "BaseBdev1", 00:10:56.282 "aliases": [ 00:10:56.282 "ddc99092-84ba-4f00-9f5b-9afb8ddedca4" 00:10:56.282 ], 00:10:56.282 "product_name": "Malloc disk", 00:10:56.282 "block_size": 512, 00:10:56.282 "num_blocks": 65536, 00:10:56.282 "uuid": "ddc99092-84ba-4f00-9f5b-9afb8ddedca4", 00:10:56.282 "assigned_rate_limits": { 00:10:56.282 "rw_ios_per_sec": 0, 00:10:56.282 "rw_mbytes_per_sec": 0, 00:10:56.282 "r_mbytes_per_sec": 0, 00:10:56.282 "w_mbytes_per_sec": 0 00:10:56.282 }, 00:10:56.282 "claimed": true, 00:10:56.282 "claim_type": "exclusive_write", 00:10:56.282 "zoned": false, 00:10:56.282 "supported_io_types": { 00:10:56.282 "read": true, 00:10:56.282 "write": true, 00:10:56.282 "unmap": true, 00:10:56.282 "flush": true, 00:10:56.282 "reset": true, 00:10:56.282 "nvme_admin": false, 00:10:56.282 "nvme_io": false, 00:10:56.282 "nvme_io_md": false, 00:10:56.282 "write_zeroes": true, 00:10:56.282 "zcopy": true, 00:10:56.282 "get_zone_info": false, 00:10:56.282 "zone_management": false, 00:10:56.282 "zone_append": false, 00:10:56.282 "compare": false, 00:10:56.282 "compare_and_write": false, 00:10:56.282 "abort": true, 00:10:56.282 "seek_hole": false, 00:10:56.282 "seek_data": false, 00:10:56.282 "copy": true, 00:10:56.282 "nvme_iov_md": false 00:10:56.282 }, 00:10:56.282 "memory_domains": [ 00:10:56.282 { 00:10:56.282 "dma_device_id": "system", 00:10:56.282 "dma_device_type": 1 00:10:56.282 }, 00:10:56.282 { 00:10:56.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.282 "dma_device_type": 2 00:10:56.282 } 00:10:56.282 ], 00:10:56.282 "driver_specific": {} 00:10:56.282 } 00:10:56.282 ] 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.282 "name": "Existed_Raid", 
00:10:56.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.282 "strip_size_kb": 0, 00:10:56.282 "state": "configuring", 00:10:56.282 "raid_level": "raid1", 00:10:56.282 "superblock": false, 00:10:56.282 "num_base_bdevs": 4, 00:10:56.282 "num_base_bdevs_discovered": 1, 00:10:56.282 "num_base_bdevs_operational": 4, 00:10:56.282 "base_bdevs_list": [ 00:10:56.282 { 00:10:56.282 "name": "BaseBdev1", 00:10:56.282 "uuid": "ddc99092-84ba-4f00-9f5b-9afb8ddedca4", 00:10:56.282 "is_configured": true, 00:10:56.282 "data_offset": 0, 00:10:56.282 "data_size": 65536 00:10:56.282 }, 00:10:56.282 { 00:10:56.282 "name": "BaseBdev2", 00:10:56.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.282 "is_configured": false, 00:10:56.282 "data_offset": 0, 00:10:56.282 "data_size": 0 00:10:56.282 }, 00:10:56.282 { 00:10:56.282 "name": "BaseBdev3", 00:10:56.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.282 "is_configured": false, 00:10:56.282 "data_offset": 0, 00:10:56.282 "data_size": 0 00:10:56.282 }, 00:10:56.282 { 00:10:56.282 "name": "BaseBdev4", 00:10:56.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.282 "is_configured": false, 00:10:56.282 "data_offset": 0, 00:10:56.282 "data_size": 0 00:10:56.282 } 00:10:56.282 ] 00:10:56.282 }' 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.282 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 [2024-12-07 02:44:07.561463] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.573 [2024-12-07 02:44:07.561526] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 [2024-12-07 02:44:07.573484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.573 [2024-12-07 02:44:07.575707] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.573 [2024-12-07 02:44:07.575794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.573 [2024-12-07 02:44:07.575822] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:56.573 [2024-12-07 02:44:07.575844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.573 [2024-12-07 02:44:07.575862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:56.573 [2024-12-07 02:44:07.575883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:56.573 
02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.573 "name": "Existed_Raid", 00:10:56.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.573 "strip_size_kb": 0, 00:10:56.573 "state": "configuring", 00:10:56.573 "raid_level": "raid1", 00:10:56.573 "superblock": false, 00:10:56.573 "num_base_bdevs": 4, 00:10:56.573 "num_base_bdevs_discovered": 1, 
00:10:56.573 "num_base_bdevs_operational": 4, 00:10:56.573 "base_bdevs_list": [ 00:10:56.573 { 00:10:56.573 "name": "BaseBdev1", 00:10:56.573 "uuid": "ddc99092-84ba-4f00-9f5b-9afb8ddedca4", 00:10:56.573 "is_configured": true, 00:10:56.573 "data_offset": 0, 00:10:56.573 "data_size": 65536 00:10:56.573 }, 00:10:56.573 { 00:10:56.573 "name": "BaseBdev2", 00:10:56.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.573 "is_configured": false, 00:10:56.573 "data_offset": 0, 00:10:56.573 "data_size": 0 00:10:56.573 }, 00:10:56.573 { 00:10:56.573 "name": "BaseBdev3", 00:10:56.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.573 "is_configured": false, 00:10:56.573 "data_offset": 0, 00:10:56.573 "data_size": 0 00:10:56.573 }, 00:10:56.573 { 00:10:56.573 "name": "BaseBdev4", 00:10:56.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.573 "is_configured": false, 00:10:56.573 "data_offset": 0, 00:10:56.573 "data_size": 0 00:10:56.573 } 00:10:56.573 ] 00:10:56.573 }' 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.573 02:44:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.142 [2024-12-07 02:44:08.044732] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.142 BaseBdev2 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.142 [ 00:10:57.142 { 00:10:57.142 "name": "BaseBdev2", 00:10:57.142 "aliases": [ 00:10:57.142 "b35f099a-bbc9-4338-9011-99725ba07abd" 00:10:57.142 ], 00:10:57.142 "product_name": "Malloc disk", 00:10:57.142 "block_size": 512, 00:10:57.142 "num_blocks": 65536, 00:10:57.142 "uuid": "b35f099a-bbc9-4338-9011-99725ba07abd", 00:10:57.142 "assigned_rate_limits": { 00:10:57.142 "rw_ios_per_sec": 0, 00:10:57.142 "rw_mbytes_per_sec": 0, 00:10:57.142 "r_mbytes_per_sec": 0, 00:10:57.142 "w_mbytes_per_sec": 0 00:10:57.142 }, 00:10:57.142 "claimed": true, 00:10:57.142 "claim_type": "exclusive_write", 00:10:57.142 "zoned": false, 00:10:57.142 "supported_io_types": { 00:10:57.142 "read": true, 
00:10:57.142 "write": true, 00:10:57.142 "unmap": true, 00:10:57.142 "flush": true, 00:10:57.142 "reset": true, 00:10:57.142 "nvme_admin": false, 00:10:57.142 "nvme_io": false, 00:10:57.142 "nvme_io_md": false, 00:10:57.142 "write_zeroes": true, 00:10:57.142 "zcopy": true, 00:10:57.142 "get_zone_info": false, 00:10:57.142 "zone_management": false, 00:10:57.142 "zone_append": false, 00:10:57.142 "compare": false, 00:10:57.142 "compare_and_write": false, 00:10:57.142 "abort": true, 00:10:57.142 "seek_hole": false, 00:10:57.142 "seek_data": false, 00:10:57.142 "copy": true, 00:10:57.142 "nvme_iov_md": false 00:10:57.142 }, 00:10:57.142 "memory_domains": [ 00:10:57.142 { 00:10:57.142 "dma_device_id": "system", 00:10:57.142 "dma_device_type": 1 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.142 "dma_device_type": 2 00:10:57.142 } 00:10:57.142 ], 00:10:57.142 "driver_specific": {} 00:10:57.142 } 00:10:57.142 ] 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.142 "name": "Existed_Raid", 00:10:57.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.142 "strip_size_kb": 0, 00:10:57.142 "state": "configuring", 00:10:57.142 "raid_level": "raid1", 00:10:57.142 "superblock": false, 00:10:57.142 "num_base_bdevs": 4, 00:10:57.142 "num_base_bdevs_discovered": 2, 00:10:57.142 "num_base_bdevs_operational": 4, 00:10:57.142 "base_bdevs_list": [ 00:10:57.142 { 00:10:57.142 "name": "BaseBdev1", 00:10:57.142 "uuid": "ddc99092-84ba-4f00-9f5b-9afb8ddedca4", 00:10:57.142 "is_configured": true, 00:10:57.142 "data_offset": 0, 00:10:57.142 "data_size": 65536 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "name": "BaseBdev2", 00:10:57.142 "uuid": "b35f099a-bbc9-4338-9011-99725ba07abd", 00:10:57.142 "is_configured": true, 
00:10:57.142 "data_offset": 0, 00:10:57.142 "data_size": 65536 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "name": "BaseBdev3", 00:10:57.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.142 "is_configured": false, 00:10:57.142 "data_offset": 0, 00:10:57.142 "data_size": 0 00:10:57.142 }, 00:10:57.142 { 00:10:57.142 "name": "BaseBdev4", 00:10:57.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.142 "is_configured": false, 00:10:57.142 "data_offset": 0, 00:10:57.142 "data_size": 0 00:10:57.142 } 00:10:57.142 ] 00:10:57.142 }' 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.142 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.710 [2024-12-07 02:44:08.548699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.710 BaseBdev3 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.710 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.710 [ 00:10:57.710 { 00:10:57.710 "name": "BaseBdev3", 00:10:57.710 "aliases": [ 00:10:57.710 "0d00b7fb-af1e-4289-8ca6-4167caa495b2" 00:10:57.710 ], 00:10:57.710 "product_name": "Malloc disk", 00:10:57.710 "block_size": 512, 00:10:57.710 "num_blocks": 65536, 00:10:57.710 "uuid": "0d00b7fb-af1e-4289-8ca6-4167caa495b2", 00:10:57.710 "assigned_rate_limits": { 00:10:57.710 "rw_ios_per_sec": 0, 00:10:57.710 "rw_mbytes_per_sec": 0, 00:10:57.710 "r_mbytes_per_sec": 0, 00:10:57.710 "w_mbytes_per_sec": 0 00:10:57.710 }, 00:10:57.710 "claimed": true, 00:10:57.710 "claim_type": "exclusive_write", 00:10:57.710 "zoned": false, 00:10:57.710 "supported_io_types": { 00:10:57.710 "read": true, 00:10:57.710 "write": true, 00:10:57.710 "unmap": true, 00:10:57.710 "flush": true, 00:10:57.710 "reset": true, 00:10:57.710 "nvme_admin": false, 00:10:57.710 "nvme_io": false, 00:10:57.710 "nvme_io_md": false, 00:10:57.710 "write_zeroes": true, 00:10:57.710 "zcopy": true, 00:10:57.710 "get_zone_info": false, 00:10:57.710 "zone_management": false, 00:10:57.710 "zone_append": false, 00:10:57.710 "compare": false, 00:10:57.710 "compare_and_write": false, 
00:10:57.710 "abort": true, 00:10:57.710 "seek_hole": false, 00:10:57.710 "seek_data": false, 00:10:57.710 "copy": true, 00:10:57.710 "nvme_iov_md": false 00:10:57.710 }, 00:10:57.710 "memory_domains": [ 00:10:57.710 { 00:10:57.710 "dma_device_id": "system", 00:10:57.710 "dma_device_type": 1 00:10:57.710 }, 00:10:57.710 { 00:10:57.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.710 "dma_device_type": 2 00:10:57.710 } 00:10:57.710 ], 00:10:57.710 "driver_specific": {} 00:10:57.711 } 00:10:57.711 ] 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.711 "name": "Existed_Raid", 00:10:57.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.711 "strip_size_kb": 0, 00:10:57.711 "state": "configuring", 00:10:57.711 "raid_level": "raid1", 00:10:57.711 "superblock": false, 00:10:57.711 "num_base_bdevs": 4, 00:10:57.711 "num_base_bdevs_discovered": 3, 00:10:57.711 "num_base_bdevs_operational": 4, 00:10:57.711 "base_bdevs_list": [ 00:10:57.711 { 00:10:57.711 "name": "BaseBdev1", 00:10:57.711 "uuid": "ddc99092-84ba-4f00-9f5b-9afb8ddedca4", 00:10:57.711 "is_configured": true, 00:10:57.711 "data_offset": 0, 00:10:57.711 "data_size": 65536 00:10:57.711 }, 00:10:57.711 { 00:10:57.711 "name": "BaseBdev2", 00:10:57.711 "uuid": "b35f099a-bbc9-4338-9011-99725ba07abd", 00:10:57.711 "is_configured": true, 00:10:57.711 "data_offset": 0, 00:10:57.711 "data_size": 65536 00:10:57.711 }, 00:10:57.711 { 00:10:57.711 "name": "BaseBdev3", 00:10:57.711 "uuid": "0d00b7fb-af1e-4289-8ca6-4167caa495b2", 00:10:57.711 "is_configured": true, 00:10:57.711 "data_offset": 0, 00:10:57.711 "data_size": 65536 00:10:57.711 }, 00:10:57.711 { 00:10:57.711 "name": "BaseBdev4", 00:10:57.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.711 "is_configured": false, 
00:10:57.711 "data_offset": 0, 00:10:57.711 "data_size": 0 00:10:57.711 } 00:10:57.711 ] 00:10:57.711 }' 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.711 02:44:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.970 [2024-12-07 02:44:09.024757] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.970 [2024-12-07 02:44:09.024808] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:10:57.970 [2024-12-07 02:44:09.024816] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:57.970 [2024-12-07 02:44:09.025120] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:57.970 [2024-12-07 02:44:09.025286] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:10:57.970 [2024-12-07 02:44:09.025305] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:10:57.970 [2024-12-07 02:44:09.025505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.970 BaseBdev4 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.970 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 [ 00:10:58.229 { 00:10:58.229 "name": "BaseBdev4", 00:10:58.229 "aliases": [ 00:10:58.229 "60f81b1c-1785-4d88-854c-5eba934f13bd" 00:10:58.229 ], 00:10:58.229 "product_name": "Malloc disk", 00:10:58.229 "block_size": 512, 00:10:58.229 "num_blocks": 65536, 00:10:58.229 "uuid": "60f81b1c-1785-4d88-854c-5eba934f13bd", 00:10:58.229 "assigned_rate_limits": { 00:10:58.229 "rw_ios_per_sec": 0, 00:10:58.229 "rw_mbytes_per_sec": 0, 00:10:58.229 "r_mbytes_per_sec": 0, 00:10:58.229 "w_mbytes_per_sec": 0 00:10:58.229 }, 00:10:58.229 "claimed": true, 00:10:58.229 "claim_type": "exclusive_write", 00:10:58.229 "zoned": false, 00:10:58.229 "supported_io_types": { 00:10:58.229 "read": true, 00:10:58.229 "write": true, 00:10:58.229 "unmap": true, 00:10:58.229 "flush": true, 00:10:58.229 "reset": true, 00:10:58.229 
"nvme_admin": false, 00:10:58.229 "nvme_io": false, 00:10:58.229 "nvme_io_md": false, 00:10:58.229 "write_zeroes": true, 00:10:58.229 "zcopy": true, 00:10:58.229 "get_zone_info": false, 00:10:58.229 "zone_management": false, 00:10:58.229 "zone_append": false, 00:10:58.229 "compare": false, 00:10:58.229 "compare_and_write": false, 00:10:58.229 "abort": true, 00:10:58.229 "seek_hole": false, 00:10:58.229 "seek_data": false, 00:10:58.229 "copy": true, 00:10:58.229 "nvme_iov_md": false 00:10:58.229 }, 00:10:58.229 "memory_domains": [ 00:10:58.229 { 00:10:58.229 "dma_device_id": "system", 00:10:58.229 "dma_device_type": 1 00:10:58.229 }, 00:10:58.229 { 00:10:58.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.229 "dma_device_type": 2 00:10:58.229 } 00:10:58.229 ], 00:10:58.229 "driver_specific": {} 00:10:58.229 } 00:10:58.229 ] 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.229 02:44:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.229 "name": "Existed_Raid", 00:10:58.229 "uuid": "eaad34ef-2a28-4f3c-af18-a6a0a6d974c6", 00:10:58.229 "strip_size_kb": 0, 00:10:58.229 "state": "online", 00:10:58.229 "raid_level": "raid1", 00:10:58.229 "superblock": false, 00:10:58.229 "num_base_bdevs": 4, 00:10:58.229 "num_base_bdevs_discovered": 4, 00:10:58.229 "num_base_bdevs_operational": 4, 00:10:58.229 "base_bdevs_list": [ 00:10:58.229 { 00:10:58.229 "name": "BaseBdev1", 00:10:58.229 "uuid": "ddc99092-84ba-4f00-9f5b-9afb8ddedca4", 00:10:58.229 "is_configured": true, 00:10:58.229 "data_offset": 0, 00:10:58.229 "data_size": 65536 00:10:58.229 }, 00:10:58.229 { 00:10:58.229 "name": "BaseBdev2", 00:10:58.229 "uuid": "b35f099a-bbc9-4338-9011-99725ba07abd", 00:10:58.229 "is_configured": true, 00:10:58.229 "data_offset": 0, 00:10:58.229 "data_size": 65536 00:10:58.229 }, 00:10:58.229 { 00:10:58.229 "name": "BaseBdev3", 00:10:58.229 "uuid": 
"0d00b7fb-af1e-4289-8ca6-4167caa495b2", 00:10:58.229 "is_configured": true, 00:10:58.229 "data_offset": 0, 00:10:58.229 "data_size": 65536 00:10:58.229 }, 00:10:58.229 { 00:10:58.229 "name": "BaseBdev4", 00:10:58.229 "uuid": "60f81b1c-1785-4d88-854c-5eba934f13bd", 00:10:58.229 "is_configured": true, 00:10:58.229 "data_offset": 0, 00:10:58.229 "data_size": 65536 00:10:58.229 } 00:10:58.229 ] 00:10:58.229 }' 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.229 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.489 [2024-12-07 02:44:09.520317] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.489 02:44:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.489 "name": "Existed_Raid", 00:10:58.489 "aliases": [ 00:10:58.489 "eaad34ef-2a28-4f3c-af18-a6a0a6d974c6" 00:10:58.489 ], 00:10:58.489 "product_name": "Raid Volume", 00:10:58.489 "block_size": 512, 00:10:58.489 "num_blocks": 65536, 00:10:58.489 "uuid": "eaad34ef-2a28-4f3c-af18-a6a0a6d974c6", 00:10:58.489 "assigned_rate_limits": { 00:10:58.489 "rw_ios_per_sec": 0, 00:10:58.489 "rw_mbytes_per_sec": 0, 00:10:58.489 "r_mbytes_per_sec": 0, 00:10:58.489 "w_mbytes_per_sec": 0 00:10:58.489 }, 00:10:58.489 "claimed": false, 00:10:58.489 "zoned": false, 00:10:58.489 "supported_io_types": { 00:10:58.489 "read": true, 00:10:58.489 "write": true, 00:10:58.489 "unmap": false, 00:10:58.489 "flush": false, 00:10:58.489 "reset": true, 00:10:58.489 "nvme_admin": false, 00:10:58.489 "nvme_io": false, 00:10:58.489 "nvme_io_md": false, 00:10:58.489 "write_zeroes": true, 00:10:58.489 "zcopy": false, 00:10:58.489 "get_zone_info": false, 00:10:58.489 "zone_management": false, 00:10:58.489 "zone_append": false, 00:10:58.489 "compare": false, 00:10:58.489 "compare_and_write": false, 00:10:58.489 "abort": false, 00:10:58.489 "seek_hole": false, 00:10:58.489 "seek_data": false, 00:10:58.489 "copy": false, 00:10:58.489 "nvme_iov_md": false 00:10:58.489 }, 00:10:58.489 "memory_domains": [ 00:10:58.489 { 00:10:58.489 "dma_device_id": "system", 00:10:58.489 "dma_device_type": 1 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.489 "dma_device_type": 2 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "dma_device_id": "system", 00:10:58.489 "dma_device_type": 1 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.489 "dma_device_type": 2 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "dma_device_id": "system", 00:10:58.489 "dma_device_type": 1 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:58.489 "dma_device_type": 2 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "dma_device_id": "system", 00:10:58.489 "dma_device_type": 1 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.489 "dma_device_type": 2 00:10:58.489 } 00:10:58.489 ], 00:10:58.489 "driver_specific": { 00:10:58.489 "raid": { 00:10:58.489 "uuid": "eaad34ef-2a28-4f3c-af18-a6a0a6d974c6", 00:10:58.489 "strip_size_kb": 0, 00:10:58.489 "state": "online", 00:10:58.489 "raid_level": "raid1", 00:10:58.489 "superblock": false, 00:10:58.489 "num_base_bdevs": 4, 00:10:58.489 "num_base_bdevs_discovered": 4, 00:10:58.489 "num_base_bdevs_operational": 4, 00:10:58.489 "base_bdevs_list": [ 00:10:58.489 { 00:10:58.489 "name": "BaseBdev1", 00:10:58.489 "uuid": "ddc99092-84ba-4f00-9f5b-9afb8ddedca4", 00:10:58.489 "is_configured": true, 00:10:58.489 "data_offset": 0, 00:10:58.489 "data_size": 65536 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "name": "BaseBdev2", 00:10:58.489 "uuid": "b35f099a-bbc9-4338-9011-99725ba07abd", 00:10:58.489 "is_configured": true, 00:10:58.489 "data_offset": 0, 00:10:58.489 "data_size": 65536 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "name": "BaseBdev3", 00:10:58.489 "uuid": "0d00b7fb-af1e-4289-8ca6-4167caa495b2", 00:10:58.489 "is_configured": true, 00:10:58.489 "data_offset": 0, 00:10:58.489 "data_size": 65536 00:10:58.489 }, 00:10:58.489 { 00:10:58.489 "name": "BaseBdev4", 00:10:58.489 "uuid": "60f81b1c-1785-4d88-854c-5eba934f13bd", 00:10:58.489 "is_configured": true, 00:10:58.489 "data_offset": 0, 00:10:58.489 "data_size": 65536 00:10:58.489 } 00:10:58.489 ] 00:10:58.489 } 00:10:58.489 } 00:10:58.489 }' 00:10:58.489 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.747 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:58.747 BaseBdev2 00:10:58.747 BaseBdev3 
00:10:58.747 BaseBdev4' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.748 02:44:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.748 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.008 02:44:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.008 [2024-12-07 02:44:09.839488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.008 
02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.008 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.008 "name": "Existed_Raid", 00:10:59.008 "uuid": "eaad34ef-2a28-4f3c-af18-a6a0a6d974c6", 00:10:59.008 "strip_size_kb": 0, 00:10:59.008 "state": "online", 00:10:59.008 "raid_level": "raid1", 00:10:59.008 "superblock": false, 00:10:59.008 "num_base_bdevs": 4, 00:10:59.008 "num_base_bdevs_discovered": 3, 00:10:59.008 "num_base_bdevs_operational": 3, 00:10:59.008 "base_bdevs_list": [ 00:10:59.008 { 00:10:59.008 "name": null, 00:10:59.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.008 "is_configured": false, 00:10:59.008 "data_offset": 0, 00:10:59.008 "data_size": 65536 00:10:59.009 }, 00:10:59.009 { 00:10:59.009 "name": "BaseBdev2", 00:10:59.009 "uuid": "b35f099a-bbc9-4338-9011-99725ba07abd", 00:10:59.009 "is_configured": true, 00:10:59.009 "data_offset": 0, 00:10:59.009 "data_size": 65536 00:10:59.009 }, 00:10:59.009 { 00:10:59.009 "name": "BaseBdev3", 00:10:59.009 "uuid": "0d00b7fb-af1e-4289-8ca6-4167caa495b2", 00:10:59.009 "is_configured": true, 00:10:59.009 "data_offset": 0, 
00:10:59.009 "data_size": 65536 00:10:59.009 }, 00:10:59.009 { 00:10:59.009 "name": "BaseBdev4", 00:10:59.009 "uuid": "60f81b1c-1785-4d88-854c-5eba934f13bd", 00:10:59.009 "is_configured": true, 00:10:59.009 "data_offset": 0, 00:10:59.009 "data_size": 65536 00:10:59.009 } 00:10:59.009 ] 00:10:59.009 }' 00:10:59.009 02:44:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.009 02:44:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.272 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:59.272 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.272 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.272 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.272 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.272 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.272 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.534 [2024-12-07 02:44:10.359342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.534 [2024-12-07 02:44:10.435733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.534 [2024-12-07 02:44:10.516082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:59.534 [2024-12-07 02:44:10.516229] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.534 [2024-12-07 02:44:10.536895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.534 [2024-12-07 02:44:10.537010] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.534 [2024-12-07 02:44:10.537069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.534 BaseBdev2 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.534 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 [ 00:10:59.794 { 00:10:59.794 "name": "BaseBdev2", 00:10:59.794 "aliases": [ 00:10:59.794 "09649802-00a4-418e-8208-98399e478aa3" 00:10:59.794 ], 00:10:59.794 "product_name": "Malloc disk", 00:10:59.794 "block_size": 512, 00:10:59.794 "num_blocks": 65536, 00:10:59.794 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:10:59.794 "assigned_rate_limits": { 00:10:59.794 "rw_ios_per_sec": 0, 00:10:59.794 "rw_mbytes_per_sec": 0, 00:10:59.794 "r_mbytes_per_sec": 0, 00:10:59.794 "w_mbytes_per_sec": 0 00:10:59.794 }, 00:10:59.794 "claimed": false, 00:10:59.794 "zoned": false, 00:10:59.794 "supported_io_types": { 00:10:59.794 "read": true, 00:10:59.794 "write": true, 00:10:59.794 "unmap": true, 00:10:59.794 "flush": true, 00:10:59.794 "reset": true, 00:10:59.794 "nvme_admin": false, 00:10:59.794 "nvme_io": false, 00:10:59.794 "nvme_io_md": false, 00:10:59.794 "write_zeroes": true, 00:10:59.794 "zcopy": true, 00:10:59.794 "get_zone_info": false, 00:10:59.794 "zone_management": false, 00:10:59.794 "zone_append": false, 00:10:59.794 "compare": false, 
00:10:59.794 "compare_and_write": false, 00:10:59.794 "abort": true, 00:10:59.794 "seek_hole": false, 00:10:59.794 "seek_data": false, 00:10:59.794 "copy": true, 00:10:59.794 "nvme_iov_md": false 00:10:59.794 }, 00:10:59.794 "memory_domains": [ 00:10:59.794 { 00:10:59.794 "dma_device_id": "system", 00:10:59.794 "dma_device_type": 1 00:10:59.794 }, 00:10:59.794 { 00:10:59.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.794 "dma_device_type": 2 00:10:59.794 } 00:10:59.794 ], 00:10:59.794 "driver_specific": {} 00:10:59.794 } 00:10:59.794 ] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 BaseBdev3 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 [ 00:10:59.794 { 00:10:59.794 "name": "BaseBdev3", 00:10:59.794 "aliases": [ 00:10:59.794 "b917c088-bd1a-4048-b969-73aaea744520" 00:10:59.794 ], 00:10:59.794 "product_name": "Malloc disk", 00:10:59.794 "block_size": 512, 00:10:59.794 "num_blocks": 65536, 00:10:59.794 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:10:59.794 "assigned_rate_limits": { 00:10:59.794 "rw_ios_per_sec": 0, 00:10:59.794 "rw_mbytes_per_sec": 0, 00:10:59.794 "r_mbytes_per_sec": 0, 00:10:59.794 "w_mbytes_per_sec": 0 00:10:59.794 }, 00:10:59.794 "claimed": false, 00:10:59.794 "zoned": false, 00:10:59.794 "supported_io_types": { 00:10:59.794 "read": true, 00:10:59.794 "write": true, 00:10:59.794 "unmap": true, 00:10:59.794 "flush": true, 00:10:59.794 "reset": true, 00:10:59.794 "nvme_admin": false, 00:10:59.794 "nvme_io": false, 00:10:59.794 "nvme_io_md": false, 00:10:59.794 "write_zeroes": true, 00:10:59.794 "zcopy": true, 00:10:59.794 "get_zone_info": false, 00:10:59.794 "zone_management": false, 00:10:59.794 "zone_append": false, 00:10:59.794 "compare": false, 00:10:59.794 
"compare_and_write": false, 00:10:59.794 "abort": true, 00:10:59.794 "seek_hole": false, 00:10:59.794 "seek_data": false, 00:10:59.794 "copy": true, 00:10:59.794 "nvme_iov_md": false 00:10:59.794 }, 00:10:59.794 "memory_domains": [ 00:10:59.794 { 00:10:59.794 "dma_device_id": "system", 00:10:59.794 "dma_device_type": 1 00:10:59.794 }, 00:10:59.794 { 00:10:59.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.794 "dma_device_type": 2 00:10:59.794 } 00:10:59.794 ], 00:10:59.794 "driver_specific": {} 00:10:59.794 } 00:10:59.794 ] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 BaseBdev4 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 [ 00:10:59.794 { 00:10:59.794 "name": "BaseBdev4", 00:10:59.794 "aliases": [ 00:10:59.794 "450c1e77-eb06-4b0f-a06b-194c9226ba3d" 00:10:59.794 ], 00:10:59.794 "product_name": "Malloc disk", 00:10:59.794 "block_size": 512, 00:10:59.794 "num_blocks": 65536, 00:10:59.794 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:10:59.794 "assigned_rate_limits": { 00:10:59.794 "rw_ios_per_sec": 0, 00:10:59.794 "rw_mbytes_per_sec": 0, 00:10:59.794 "r_mbytes_per_sec": 0, 00:10:59.794 "w_mbytes_per_sec": 0 00:10:59.794 }, 00:10:59.794 "claimed": false, 00:10:59.794 "zoned": false, 00:10:59.794 "supported_io_types": { 00:10:59.794 "read": true, 00:10:59.794 "write": true, 00:10:59.794 "unmap": true, 00:10:59.794 "flush": true, 00:10:59.794 "reset": true, 00:10:59.794 "nvme_admin": false, 00:10:59.794 "nvme_io": false, 00:10:59.794 "nvme_io_md": false, 00:10:59.794 "write_zeroes": true, 00:10:59.794 "zcopy": true, 00:10:59.794 "get_zone_info": false, 00:10:59.794 "zone_management": false, 00:10:59.794 "zone_append": false, 00:10:59.794 "compare": false, 00:10:59.794 
"compare_and_write": false, 00:10:59.794 "abort": true, 00:10:59.794 "seek_hole": false, 00:10:59.794 "seek_data": false, 00:10:59.794 "copy": true, 00:10:59.794 "nvme_iov_md": false 00:10:59.794 }, 00:10:59.794 "memory_domains": [ 00:10:59.794 { 00:10:59.794 "dma_device_id": "system", 00:10:59.794 "dma_device_type": 1 00:10:59.794 }, 00:10:59.794 { 00:10:59.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.794 "dma_device_type": 2 00:10:59.794 } 00:10:59.794 ], 00:10:59.794 "driver_specific": {} 00:10:59.794 } 00:10:59.794 ] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.794 [2024-12-07 02:44:10.772080] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:59.794 [2024-12-07 02:44:10.772174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:59.794 [2024-12-07 02:44:10.772219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:59.794 [2024-12-07 02:44:10.774325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.794 [2024-12-07 02:44:10.774406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.794 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.795 "name": "Existed_Raid", 00:10:59.795 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:59.795 "strip_size_kb": 0, 00:10:59.795 "state": "configuring", 00:10:59.795 "raid_level": "raid1", 00:10:59.795 "superblock": false, 00:10:59.795 "num_base_bdevs": 4, 00:10:59.795 "num_base_bdevs_discovered": 3, 00:10:59.795 "num_base_bdevs_operational": 4, 00:10:59.795 "base_bdevs_list": [ 00:10:59.795 { 00:10:59.795 "name": "BaseBdev1", 00:10:59.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.795 "is_configured": false, 00:10:59.795 "data_offset": 0, 00:10:59.795 "data_size": 0 00:10:59.795 }, 00:10:59.795 { 00:10:59.795 "name": "BaseBdev2", 00:10:59.795 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:10:59.795 "is_configured": true, 00:10:59.795 "data_offset": 0, 00:10:59.795 "data_size": 65536 00:10:59.795 }, 00:10:59.795 { 00:10:59.795 "name": "BaseBdev3", 00:10:59.795 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:10:59.795 "is_configured": true, 00:10:59.795 "data_offset": 0, 00:10:59.795 "data_size": 65536 00:10:59.795 }, 00:10:59.795 { 00:10:59.795 "name": "BaseBdev4", 00:10:59.795 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:10:59.795 "is_configured": true, 00:10:59.795 "data_offset": 0, 00:10:59.795 "data_size": 65536 00:10:59.795 } 00:10:59.795 ] 00:10:59.795 }' 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.795 02:44:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.361 [2024-12-07 02:44:11.235265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.361 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.362 "name": "Existed_Raid", 00:11:00.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.362 
"strip_size_kb": 0, 00:11:00.362 "state": "configuring", 00:11:00.362 "raid_level": "raid1", 00:11:00.362 "superblock": false, 00:11:00.362 "num_base_bdevs": 4, 00:11:00.362 "num_base_bdevs_discovered": 2, 00:11:00.362 "num_base_bdevs_operational": 4, 00:11:00.362 "base_bdevs_list": [ 00:11:00.362 { 00:11:00.362 "name": "BaseBdev1", 00:11:00.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.362 "is_configured": false, 00:11:00.362 "data_offset": 0, 00:11:00.362 "data_size": 0 00:11:00.362 }, 00:11:00.362 { 00:11:00.362 "name": null, 00:11:00.362 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:00.362 "is_configured": false, 00:11:00.362 "data_offset": 0, 00:11:00.362 "data_size": 65536 00:11:00.362 }, 00:11:00.362 { 00:11:00.362 "name": "BaseBdev3", 00:11:00.362 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:00.362 "is_configured": true, 00:11:00.362 "data_offset": 0, 00:11:00.362 "data_size": 65536 00:11:00.362 }, 00:11:00.362 { 00:11:00.362 "name": "BaseBdev4", 00:11:00.362 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:00.362 "is_configured": true, 00:11:00.362 "data_offset": 0, 00:11:00.362 "data_size": 65536 00:11:00.362 } 00:11:00.362 ] 00:11:00.362 }' 00:11:00.362 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.362 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.621 02:44:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.621 [2024-12-07 02:44:11.691161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.621 BaseBdev1 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.621 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.881 [ 00:11:00.881 { 00:11:00.881 "name": "BaseBdev1", 00:11:00.881 "aliases": [ 00:11:00.881 "c34af117-536c-4d28-8f88-5a8826615f4c" 00:11:00.881 ], 00:11:00.881 "product_name": "Malloc disk", 00:11:00.881 "block_size": 512, 00:11:00.881 "num_blocks": 65536, 00:11:00.881 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:00.881 "assigned_rate_limits": { 00:11:00.881 "rw_ios_per_sec": 0, 00:11:00.881 "rw_mbytes_per_sec": 0, 00:11:00.881 "r_mbytes_per_sec": 0, 00:11:00.881 "w_mbytes_per_sec": 0 00:11:00.881 }, 00:11:00.881 "claimed": true, 00:11:00.881 "claim_type": "exclusive_write", 00:11:00.881 "zoned": false, 00:11:00.881 "supported_io_types": { 00:11:00.881 "read": true, 00:11:00.881 "write": true, 00:11:00.881 "unmap": true, 00:11:00.881 "flush": true, 00:11:00.881 "reset": true, 00:11:00.881 "nvme_admin": false, 00:11:00.881 "nvme_io": false, 00:11:00.881 "nvme_io_md": false, 00:11:00.881 "write_zeroes": true, 00:11:00.881 "zcopy": true, 00:11:00.881 "get_zone_info": false, 00:11:00.881 "zone_management": false, 00:11:00.881 "zone_append": false, 00:11:00.881 "compare": false, 00:11:00.881 "compare_and_write": false, 00:11:00.881 "abort": true, 00:11:00.881 "seek_hole": false, 00:11:00.881 "seek_data": false, 00:11:00.881 "copy": true, 00:11:00.881 "nvme_iov_md": false 00:11:00.881 }, 00:11:00.881 "memory_domains": [ 00:11:00.881 { 00:11:00.881 "dma_device_id": "system", 00:11:00.881 "dma_device_type": 1 00:11:00.881 }, 00:11:00.881 { 00:11:00.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.881 "dma_device_type": 2 00:11:00.881 } 00:11:00.881 ], 00:11:00.881 "driver_specific": {} 00:11:00.881 } 00:11:00.881 ] 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.881 "name": "Existed_Raid", 00:11:00.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.881 
"strip_size_kb": 0, 00:11:00.881 "state": "configuring", 00:11:00.881 "raid_level": "raid1", 00:11:00.881 "superblock": false, 00:11:00.881 "num_base_bdevs": 4, 00:11:00.881 "num_base_bdevs_discovered": 3, 00:11:00.881 "num_base_bdevs_operational": 4, 00:11:00.881 "base_bdevs_list": [ 00:11:00.881 { 00:11:00.881 "name": "BaseBdev1", 00:11:00.881 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:00.881 "is_configured": true, 00:11:00.881 "data_offset": 0, 00:11:00.881 "data_size": 65536 00:11:00.881 }, 00:11:00.881 { 00:11:00.881 "name": null, 00:11:00.881 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:00.881 "is_configured": false, 00:11:00.881 "data_offset": 0, 00:11:00.881 "data_size": 65536 00:11:00.881 }, 00:11:00.881 { 00:11:00.881 "name": "BaseBdev3", 00:11:00.881 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:00.881 "is_configured": true, 00:11:00.881 "data_offset": 0, 00:11:00.881 "data_size": 65536 00:11:00.881 }, 00:11:00.881 { 00:11:00.881 "name": "BaseBdev4", 00:11:00.881 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:00.881 "is_configured": true, 00:11:00.881 "data_offset": 0, 00:11:00.881 "data_size": 65536 00:11:00.881 } 00:11:00.881 ] 00:11:00.881 }' 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.881 02:44:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.141 
02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.141 [2024-12-07 02:44:12.210354] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.141 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.401 02:44:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.401 "name": "Existed_Raid", 00:11:01.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.401 "strip_size_kb": 0, 00:11:01.401 "state": "configuring", 00:11:01.401 "raid_level": "raid1", 00:11:01.401 "superblock": false, 00:11:01.401 "num_base_bdevs": 4, 00:11:01.401 "num_base_bdevs_discovered": 2, 00:11:01.401 "num_base_bdevs_operational": 4, 00:11:01.401 "base_bdevs_list": [ 00:11:01.401 { 00:11:01.401 "name": "BaseBdev1", 00:11:01.401 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:01.401 "is_configured": true, 00:11:01.401 "data_offset": 0, 00:11:01.401 "data_size": 65536 00:11:01.401 }, 00:11:01.401 { 00:11:01.401 "name": null, 00:11:01.401 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:01.401 "is_configured": false, 00:11:01.401 "data_offset": 0, 00:11:01.401 "data_size": 65536 00:11:01.401 }, 00:11:01.401 { 00:11:01.401 "name": null, 00:11:01.401 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:01.401 "is_configured": false, 00:11:01.401 "data_offset": 0, 00:11:01.401 "data_size": 65536 00:11:01.401 }, 00:11:01.401 { 00:11:01.401 "name": "BaseBdev4", 00:11:01.401 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:01.401 "is_configured": true, 00:11:01.401 "data_offset": 0, 00:11:01.401 "data_size": 65536 00:11:01.401 } 00:11:01.401 ] 00:11:01.401 }' 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.401 02:44:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.661 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 [2024-12-07 02:44:12.701538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.662 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.922 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.922 "name": "Existed_Raid", 00:11:01.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.922 "strip_size_kb": 0, 00:11:01.922 "state": "configuring", 00:11:01.922 "raid_level": "raid1", 00:11:01.922 "superblock": false, 00:11:01.922 "num_base_bdevs": 4, 00:11:01.922 "num_base_bdevs_discovered": 3, 00:11:01.922 "num_base_bdevs_operational": 4, 00:11:01.922 "base_bdevs_list": [ 00:11:01.922 { 00:11:01.922 "name": "BaseBdev1", 00:11:01.922 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:01.922 "is_configured": true, 00:11:01.922 "data_offset": 0, 00:11:01.922 "data_size": 65536 00:11:01.922 }, 00:11:01.922 { 00:11:01.922 "name": null, 00:11:01.922 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:01.922 "is_configured": false, 00:11:01.922 "data_offset": 0, 00:11:01.922 "data_size": 65536 00:11:01.922 }, 00:11:01.922 { 
00:11:01.922 "name": "BaseBdev3", 00:11:01.922 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:01.922 "is_configured": true, 00:11:01.922 "data_offset": 0, 00:11:01.922 "data_size": 65536 00:11:01.922 }, 00:11:01.922 { 00:11:01.922 "name": "BaseBdev4", 00:11:01.922 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:01.922 "is_configured": true, 00:11:01.922 "data_offset": 0, 00:11:01.922 "data_size": 65536 00:11:01.922 } 00:11:01.922 ] 00:11:01.922 }' 00:11:01.922 02:44:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.922 02:44:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 [2024-12-07 02:44:13.208776] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.182 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.443 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.443 "name": "Existed_Raid", 00:11:02.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.443 "strip_size_kb": 0, 00:11:02.443 "state": "configuring", 00:11:02.443 "raid_level": "raid1", 00:11:02.443 "superblock": false, 00:11:02.443 
"num_base_bdevs": 4, 00:11:02.443 "num_base_bdevs_discovered": 2, 00:11:02.443 "num_base_bdevs_operational": 4, 00:11:02.443 "base_bdevs_list": [ 00:11:02.443 { 00:11:02.443 "name": null, 00:11:02.443 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:02.443 "is_configured": false, 00:11:02.443 "data_offset": 0, 00:11:02.443 "data_size": 65536 00:11:02.443 }, 00:11:02.443 { 00:11:02.443 "name": null, 00:11:02.443 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:02.443 "is_configured": false, 00:11:02.443 "data_offset": 0, 00:11:02.443 "data_size": 65536 00:11:02.443 }, 00:11:02.443 { 00:11:02.443 "name": "BaseBdev3", 00:11:02.443 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:02.443 "is_configured": true, 00:11:02.443 "data_offset": 0, 00:11:02.443 "data_size": 65536 00:11:02.443 }, 00:11:02.443 { 00:11:02.443 "name": "BaseBdev4", 00:11:02.443 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:02.443 "is_configured": true, 00:11:02.443 "data_offset": 0, 00:11:02.443 "data_size": 65536 00:11:02.443 } 00:11:02.443 ] 00:11:02.443 }' 00:11:02.443 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.443 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:02.703 02:44:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.703 [2024-12-07 02:44:13.711606] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.703 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.704 02:44:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.704 "name": "Existed_Raid", 00:11:02.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.704 "strip_size_kb": 0, 00:11:02.704 "state": "configuring", 00:11:02.704 "raid_level": "raid1", 00:11:02.704 "superblock": false, 00:11:02.704 "num_base_bdevs": 4, 00:11:02.704 "num_base_bdevs_discovered": 3, 00:11:02.704 "num_base_bdevs_operational": 4, 00:11:02.704 "base_bdevs_list": [ 00:11:02.704 { 00:11:02.704 "name": null, 00:11:02.704 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:02.704 "is_configured": false, 00:11:02.704 "data_offset": 0, 00:11:02.704 "data_size": 65536 00:11:02.704 }, 00:11:02.704 { 00:11:02.704 "name": "BaseBdev2", 00:11:02.704 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:02.704 "is_configured": true, 00:11:02.704 "data_offset": 0, 00:11:02.704 "data_size": 65536 00:11:02.704 }, 00:11:02.704 { 00:11:02.704 "name": "BaseBdev3", 00:11:02.704 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:02.704 "is_configured": true, 00:11:02.704 "data_offset": 0, 00:11:02.704 "data_size": 65536 00:11:02.704 }, 00:11:02.704 { 00:11:02.704 "name": "BaseBdev4", 00:11:02.704 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:02.704 "is_configured": true, 00:11:02.704 "data_offset": 0, 00:11:02.704 "data_size": 65536 00:11:02.704 } 00:11:02.704 ] 00:11:02.704 }' 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.704 02:44:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c34af117-536c-4d28-8f88-5a8826615f4c 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.274 [2024-12-07 02:44:14.211647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:03.274 [2024-12-07 02:44:14.211699] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:03.274 [2024-12-07 02:44:14.211728] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:03.274 [2024-12-07 02:44:14.212008] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:03.274 [2024-12-07 02:44:14.212167] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:03.274 [2024-12-07 02:44:14.212177] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:03.274 [2024-12-07 02:44:14.212376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.274 NewBaseBdev 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.274 02:44:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.274 [ 00:11:03.274 { 00:11:03.274 "name": "NewBaseBdev", 00:11:03.274 "aliases": [ 00:11:03.274 "c34af117-536c-4d28-8f88-5a8826615f4c" 00:11:03.274 ], 00:11:03.274 "product_name": "Malloc disk", 00:11:03.274 "block_size": 512, 00:11:03.274 "num_blocks": 65536, 00:11:03.274 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:03.274 "assigned_rate_limits": { 00:11:03.274 "rw_ios_per_sec": 0, 00:11:03.274 "rw_mbytes_per_sec": 0, 00:11:03.274 "r_mbytes_per_sec": 0, 00:11:03.274 "w_mbytes_per_sec": 0 00:11:03.274 }, 00:11:03.274 "claimed": true, 00:11:03.274 "claim_type": "exclusive_write", 00:11:03.274 "zoned": false, 00:11:03.274 "supported_io_types": { 00:11:03.274 "read": true, 00:11:03.274 "write": true, 00:11:03.274 "unmap": true, 00:11:03.274 "flush": true, 00:11:03.274 "reset": true, 00:11:03.274 "nvme_admin": false, 00:11:03.274 "nvme_io": false, 00:11:03.274 "nvme_io_md": false, 00:11:03.274 "write_zeroes": true, 00:11:03.274 "zcopy": true, 00:11:03.274 "get_zone_info": false, 00:11:03.274 "zone_management": false, 00:11:03.274 "zone_append": false, 00:11:03.274 "compare": false, 00:11:03.274 "compare_and_write": false, 00:11:03.274 "abort": true, 00:11:03.274 "seek_hole": false, 00:11:03.274 "seek_data": false, 00:11:03.274 "copy": true, 00:11:03.274 "nvme_iov_md": false 00:11:03.274 }, 00:11:03.274 "memory_domains": [ 00:11:03.274 { 00:11:03.274 "dma_device_id": "system", 00:11:03.274 "dma_device_type": 1 00:11:03.274 }, 00:11:03.274 { 00:11:03.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.274 "dma_device_type": 2 00:11:03.274 } 00:11:03.274 ], 00:11:03.274 "driver_specific": {} 00:11:03.274 } 00:11:03.274 ] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:03.274 02:44:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.274 "name": "Existed_Raid", 00:11:03.274 "uuid": "eb5ceda3-5966-43ed-a43e-2bd2e658ebc9", 00:11:03.274 "strip_size_kb": 0, 00:11:03.274 "state": "online", 00:11:03.274 "raid_level": "raid1", 
00:11:03.274 "superblock": false, 00:11:03.274 "num_base_bdevs": 4, 00:11:03.274 "num_base_bdevs_discovered": 4, 00:11:03.274 "num_base_bdevs_operational": 4, 00:11:03.274 "base_bdevs_list": [ 00:11:03.274 { 00:11:03.274 "name": "NewBaseBdev", 00:11:03.274 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:03.274 "is_configured": true, 00:11:03.274 "data_offset": 0, 00:11:03.274 "data_size": 65536 00:11:03.274 }, 00:11:03.274 { 00:11:03.274 "name": "BaseBdev2", 00:11:03.274 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:03.274 "is_configured": true, 00:11:03.274 "data_offset": 0, 00:11:03.274 "data_size": 65536 00:11:03.274 }, 00:11:03.274 { 00:11:03.274 "name": "BaseBdev3", 00:11:03.274 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:03.274 "is_configured": true, 00:11:03.274 "data_offset": 0, 00:11:03.274 "data_size": 65536 00:11:03.274 }, 00:11:03.274 { 00:11:03.274 "name": "BaseBdev4", 00:11:03.274 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:03.274 "is_configured": true, 00:11:03.274 "data_offset": 0, 00:11:03.274 "data_size": 65536 00:11:03.274 } 00:11:03.274 ] 00:11:03.274 }' 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.274 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:03.844 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 [2024-12-07 02:44:14.719289] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:03.845 "name": "Existed_Raid", 00:11:03.845 "aliases": [ 00:11:03.845 "eb5ceda3-5966-43ed-a43e-2bd2e658ebc9" 00:11:03.845 ], 00:11:03.845 "product_name": "Raid Volume", 00:11:03.845 "block_size": 512, 00:11:03.845 "num_blocks": 65536, 00:11:03.845 "uuid": "eb5ceda3-5966-43ed-a43e-2bd2e658ebc9", 00:11:03.845 "assigned_rate_limits": { 00:11:03.845 "rw_ios_per_sec": 0, 00:11:03.845 "rw_mbytes_per_sec": 0, 00:11:03.845 "r_mbytes_per_sec": 0, 00:11:03.845 "w_mbytes_per_sec": 0 00:11:03.845 }, 00:11:03.845 "claimed": false, 00:11:03.845 "zoned": false, 00:11:03.845 "supported_io_types": { 00:11:03.845 "read": true, 00:11:03.845 "write": true, 00:11:03.845 "unmap": false, 00:11:03.845 "flush": false, 00:11:03.845 "reset": true, 00:11:03.845 "nvme_admin": false, 00:11:03.845 "nvme_io": false, 00:11:03.845 "nvme_io_md": false, 00:11:03.845 "write_zeroes": true, 00:11:03.845 "zcopy": false, 00:11:03.845 "get_zone_info": false, 00:11:03.845 "zone_management": false, 00:11:03.845 "zone_append": false, 00:11:03.845 "compare": false, 00:11:03.845 "compare_and_write": false, 00:11:03.845 "abort": false, 00:11:03.845 "seek_hole": false, 00:11:03.845 "seek_data": false, 00:11:03.845 "copy": false, 00:11:03.845 
"nvme_iov_md": false 00:11:03.845 }, 00:11:03.845 "memory_domains": [ 00:11:03.845 { 00:11:03.845 "dma_device_id": "system", 00:11:03.845 "dma_device_type": 1 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.845 "dma_device_type": 2 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "dma_device_id": "system", 00:11:03.845 "dma_device_type": 1 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.845 "dma_device_type": 2 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "dma_device_id": "system", 00:11:03.845 "dma_device_type": 1 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.845 "dma_device_type": 2 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "dma_device_id": "system", 00:11:03.845 "dma_device_type": 1 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.845 "dma_device_type": 2 00:11:03.845 } 00:11:03.845 ], 00:11:03.845 "driver_specific": { 00:11:03.845 "raid": { 00:11:03.845 "uuid": "eb5ceda3-5966-43ed-a43e-2bd2e658ebc9", 00:11:03.845 "strip_size_kb": 0, 00:11:03.845 "state": "online", 00:11:03.845 "raid_level": "raid1", 00:11:03.845 "superblock": false, 00:11:03.845 "num_base_bdevs": 4, 00:11:03.845 "num_base_bdevs_discovered": 4, 00:11:03.845 "num_base_bdevs_operational": 4, 00:11:03.845 "base_bdevs_list": [ 00:11:03.845 { 00:11:03.845 "name": "NewBaseBdev", 00:11:03.845 "uuid": "c34af117-536c-4d28-8f88-5a8826615f4c", 00:11:03.845 "is_configured": true, 00:11:03.845 "data_offset": 0, 00:11:03.845 "data_size": 65536 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "name": "BaseBdev2", 00:11:03.845 "uuid": "09649802-00a4-418e-8208-98399e478aa3", 00:11:03.845 "is_configured": true, 00:11:03.845 "data_offset": 0, 00:11:03.845 "data_size": 65536 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "name": "BaseBdev3", 00:11:03.845 "uuid": "b917c088-bd1a-4048-b969-73aaea744520", 00:11:03.845 "is_configured": true, 
00:11:03.845 "data_offset": 0, 00:11:03.845 "data_size": 65536 00:11:03.845 }, 00:11:03.845 { 00:11:03.845 "name": "BaseBdev4", 00:11:03.845 "uuid": "450c1e77-eb06-4b0f-a06b-194c9226ba3d", 00:11:03.845 "is_configured": true, 00:11:03.845 "data_offset": 0, 00:11:03.845 "data_size": 65536 00:11:03.845 } 00:11:03.845 ] 00:11:03.845 } 00:11:03.845 } 00:11:03.845 }' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:03.845 BaseBdev2 00:11:03.845 BaseBdev3 00:11:03.845 BaseBdev4' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:03.845 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.106 02:44:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.106 [2024-12-07 02:44:15.030337] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.106 [2024-12-07 02:44:15.030370] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.106 [2024-12-07 02:44:15.030509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.106 [2024-12-07 02:44:15.030806] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:04.106 [2024-12-07 02:44:15.030833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84225 
00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 84225 ']' 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 84225 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84225 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84225' 00:11:04.106 killing process with pid 84225 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 84225 00:11:04.106 [2024-12-07 02:44:15.079944] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:04.106 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 84225 00:11:04.106 [2024-12-07 02:44:15.159749] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:04.673 00:11:04.673 real 0m9.779s 00:11:04.673 user 0m16.271s 00:11:04.673 sys 0m2.212s 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.673 ************************************ 00:11:04.673 END TEST raid_state_function_test 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.673 ************************************ 00:11:04.673 02:44:15 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:04.673 02:44:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:04.673 02:44:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.673 02:44:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.673 ************************************ 00:11:04.673 START TEST raid_state_function_test_sb 00:11:04.673 ************************************ 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:44:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:04.673 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84874 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84874' 00:11:04.674 Process raid pid: 84874 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84874 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84874 ']' 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.674 02:44:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.674 [2024-12-07 02:44:15.708203] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:04.674 [2024-12-07 02:44:15.708417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.932 [2024-12-07 02:44:15.869782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.932 [2024-12-07 02:44:15.948221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.191 [2024-12-07 02:44:16.025786] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.191 [2024-12-07 02:44:16.025822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.760 [2024-12-07 02:44:16.538423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.760 [2024-12-07 02:44:16.538490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.760 [2024-12-07 02:44:16.538504] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.760 [2024-12-07 02:44:16.538517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.760 [2024-12-07 02:44:16.538525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:05.760 [2024-12-07 02:44:16.538539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.760 [2024-12-07 02:44:16.538544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.760 [2024-12-07 02:44:16.538553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.760 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.761 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.761 02:44:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.761 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.761 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.761 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.761 "name": "Existed_Raid", 00:11:05.761 "uuid": "9afd26cb-3217-4820-b461-1323dfa83a67", 00:11:05.761 "strip_size_kb": 0, 00:11:05.761 "state": "configuring", 00:11:05.761 "raid_level": "raid1", 00:11:05.761 "superblock": true, 00:11:05.761 "num_base_bdevs": 4, 00:11:05.761 "num_base_bdevs_discovered": 0, 00:11:05.761 "num_base_bdevs_operational": 4, 00:11:05.761 "base_bdevs_list": [ 00:11:05.761 { 00:11:05.761 "name": "BaseBdev1", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 }, 00:11:05.761 { 00:11:05.761 "name": "BaseBdev2", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 }, 00:11:05.761 { 00:11:05.761 "name": "BaseBdev3", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 }, 00:11:05.761 { 00:11:05.761 "name": "BaseBdev4", 00:11:05.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.761 "is_configured": false, 00:11:05.761 "data_offset": 0, 00:11:05.761 "data_size": 0 00:11:05.761 } 00:11:05.761 ] 00:11:05.761 }' 00:11:05.761 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.761 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.021 [2024-12-07 02:44:16.949598] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.021 [2024-12-07 02:44:16.949740] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.021 [2024-12-07 02:44:16.961604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.021 [2024-12-07 02:44:16.961651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.021 [2024-12-07 02:44:16.961661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.021 [2024-12-07 02:44:16.961671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.021 [2024-12-07 02:44:16.961678] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.021 [2024-12-07 02:44:16.961688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.021 [2024-12-07 02:44:16.961694] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:06.021 [2024-12-07 02:44:16.961704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.021 [2024-12-07 02:44:16.989208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.021 BaseBdev1 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.021 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.022 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.022 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.022 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.022 02:44:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.022 [ 00:11:06.022 { 00:11:06.022 "name": "BaseBdev1", 00:11:06.022 "aliases": [ 00:11:06.022 "e4c431e8-1c62-4563-af39-042f402d2e92" 00:11:06.022 ], 00:11:06.022 "product_name": "Malloc disk", 00:11:06.022 "block_size": 512, 00:11:06.022 "num_blocks": 65536, 00:11:06.022 "uuid": "e4c431e8-1c62-4563-af39-042f402d2e92", 00:11:06.022 "assigned_rate_limits": { 00:11:06.022 "rw_ios_per_sec": 0, 00:11:06.022 "rw_mbytes_per_sec": 0, 00:11:06.022 "r_mbytes_per_sec": 0, 00:11:06.022 "w_mbytes_per_sec": 0 00:11:06.022 }, 00:11:06.022 "claimed": true, 00:11:06.022 "claim_type": "exclusive_write", 00:11:06.022 "zoned": false, 00:11:06.022 "supported_io_types": { 00:11:06.022 "read": true, 00:11:06.022 "write": true, 00:11:06.022 "unmap": true, 00:11:06.022 "flush": true, 00:11:06.022 "reset": true, 00:11:06.022 "nvme_admin": false, 00:11:06.022 "nvme_io": false, 00:11:06.022 "nvme_io_md": false, 00:11:06.022 "write_zeroes": true, 00:11:06.022 "zcopy": true, 00:11:06.022 "get_zone_info": false, 00:11:06.022 "zone_management": false, 00:11:06.022 "zone_append": false, 00:11:06.022 "compare": false, 00:11:06.022 "compare_and_write": false, 00:11:06.022 "abort": true, 00:11:06.022 "seek_hole": false, 00:11:06.022 "seek_data": false, 00:11:06.022 "copy": true, 00:11:06.022 "nvme_iov_md": false 00:11:06.022 }, 00:11:06.022 "memory_domains": [ 00:11:06.022 { 00:11:06.022 "dma_device_id": "system", 00:11:06.022 "dma_device_type": 1 00:11:06.022 }, 00:11:06.022 { 00:11:06.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.022 "dma_device_type": 2 00:11:06.022 } 00:11:06.022 ], 00:11:06.022 "driver_specific": {} 
00:11:06.022 } 00:11:06.022 ] 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.022 "name": "Existed_Raid", 00:11:06.022 "uuid": "5fc07ecb-7e77-4976-842b-77b670407a33", 00:11:06.022 "strip_size_kb": 0, 00:11:06.022 "state": "configuring", 00:11:06.022 "raid_level": "raid1", 00:11:06.022 "superblock": true, 00:11:06.022 "num_base_bdevs": 4, 00:11:06.022 "num_base_bdevs_discovered": 1, 00:11:06.022 "num_base_bdevs_operational": 4, 00:11:06.022 "base_bdevs_list": [ 00:11:06.022 { 00:11:06.022 "name": "BaseBdev1", 00:11:06.022 "uuid": "e4c431e8-1c62-4563-af39-042f402d2e92", 00:11:06.022 "is_configured": true, 00:11:06.022 "data_offset": 2048, 00:11:06.022 "data_size": 63488 00:11:06.022 }, 00:11:06.022 { 00:11:06.022 "name": "BaseBdev2", 00:11:06.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.022 "is_configured": false, 00:11:06.022 "data_offset": 0, 00:11:06.022 "data_size": 0 00:11:06.022 }, 00:11:06.022 { 00:11:06.022 "name": "BaseBdev3", 00:11:06.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.022 "is_configured": false, 00:11:06.022 "data_offset": 0, 00:11:06.022 "data_size": 0 00:11:06.022 }, 00:11:06.022 { 00:11:06.022 "name": "BaseBdev4", 00:11:06.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.022 "is_configured": false, 00:11:06.022 "data_offset": 0, 00:11:06.022 "data_size": 0 00:11:06.022 } 00:11:06.022 ] 00:11:06.022 }' 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.022 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.593 [2024-12-07 02:44:17.448517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.593 [2024-12-07 02:44:17.448719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.593 [2024-12-07 02:44:17.460492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.593 [2024-12-07 02:44:17.462721] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.593 [2024-12-07 02:44:17.462798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.593 [2024-12-07 02:44:17.462825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.593 [2024-12-07 02:44:17.462847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.593 [2024-12-07 02:44:17.462864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:06.593 [2024-12-07 02:44:17.462883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:06.593 02:44:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.593 "name": 
"Existed_Raid", 00:11:06.593 "uuid": "4e71158c-9802-484e-937f-5dda4c9accd6", 00:11:06.593 "strip_size_kb": 0, 00:11:06.593 "state": "configuring", 00:11:06.593 "raid_level": "raid1", 00:11:06.593 "superblock": true, 00:11:06.593 "num_base_bdevs": 4, 00:11:06.593 "num_base_bdevs_discovered": 1, 00:11:06.593 "num_base_bdevs_operational": 4, 00:11:06.593 "base_bdevs_list": [ 00:11:06.593 { 00:11:06.593 "name": "BaseBdev1", 00:11:06.593 "uuid": "e4c431e8-1c62-4563-af39-042f402d2e92", 00:11:06.593 "is_configured": true, 00:11:06.593 "data_offset": 2048, 00:11:06.593 "data_size": 63488 00:11:06.593 }, 00:11:06.593 { 00:11:06.593 "name": "BaseBdev2", 00:11:06.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.593 "is_configured": false, 00:11:06.593 "data_offset": 0, 00:11:06.593 "data_size": 0 00:11:06.593 }, 00:11:06.593 { 00:11:06.593 "name": "BaseBdev3", 00:11:06.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.593 "is_configured": false, 00:11:06.593 "data_offset": 0, 00:11:06.593 "data_size": 0 00:11:06.593 }, 00:11:06.593 { 00:11:06.593 "name": "BaseBdev4", 00:11:06.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.593 "is_configured": false, 00:11:06.593 "data_offset": 0, 00:11:06.593 "data_size": 0 00:11:06.593 } 00:11:06.593 ] 00:11:06.593 }' 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.593 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.853 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.853 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.853 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 [2024-12-07 02:44:17.945792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:07.112 
BaseBdev2 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 [ 00:11:07.112 { 00:11:07.112 "name": "BaseBdev2", 00:11:07.112 "aliases": [ 00:11:07.112 "a71517fd-eb6e-4c5d-a4e3-30b66558b1af" 00:11:07.112 ], 00:11:07.112 "product_name": "Malloc disk", 00:11:07.112 "block_size": 512, 00:11:07.112 "num_blocks": 65536, 00:11:07.112 "uuid": "a71517fd-eb6e-4c5d-a4e3-30b66558b1af", 00:11:07.112 "assigned_rate_limits": { 
00:11:07.112 "rw_ios_per_sec": 0, 00:11:07.112 "rw_mbytes_per_sec": 0, 00:11:07.112 "r_mbytes_per_sec": 0, 00:11:07.112 "w_mbytes_per_sec": 0 00:11:07.112 }, 00:11:07.112 "claimed": true, 00:11:07.112 "claim_type": "exclusive_write", 00:11:07.112 "zoned": false, 00:11:07.112 "supported_io_types": { 00:11:07.112 "read": true, 00:11:07.112 "write": true, 00:11:07.112 "unmap": true, 00:11:07.112 "flush": true, 00:11:07.112 "reset": true, 00:11:07.112 "nvme_admin": false, 00:11:07.112 "nvme_io": false, 00:11:07.112 "nvme_io_md": false, 00:11:07.112 "write_zeroes": true, 00:11:07.112 "zcopy": true, 00:11:07.112 "get_zone_info": false, 00:11:07.112 "zone_management": false, 00:11:07.112 "zone_append": false, 00:11:07.112 "compare": false, 00:11:07.112 "compare_and_write": false, 00:11:07.112 "abort": true, 00:11:07.112 "seek_hole": false, 00:11:07.112 "seek_data": false, 00:11:07.112 "copy": true, 00:11:07.112 "nvme_iov_md": false 00:11:07.112 }, 00:11:07.112 "memory_domains": [ 00:11:07.112 { 00:11:07.112 "dma_device_id": "system", 00:11:07.112 "dma_device_type": 1 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.112 "dma_device_type": 2 00:11:07.112 } 00:11:07.112 ], 00:11:07.112 "driver_specific": {} 00:11:07.112 } 00:11:07.112 ] 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.112 02:44:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.112 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.112 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.112 "name": "Existed_Raid", 00:11:07.112 "uuid": "4e71158c-9802-484e-937f-5dda4c9accd6", 00:11:07.112 "strip_size_kb": 0, 00:11:07.112 "state": "configuring", 00:11:07.112 "raid_level": "raid1", 00:11:07.112 "superblock": true, 00:11:07.112 "num_base_bdevs": 4, 00:11:07.112 "num_base_bdevs_discovered": 2, 00:11:07.112 "num_base_bdevs_operational": 4, 00:11:07.112 
"base_bdevs_list": [ 00:11:07.112 { 00:11:07.112 "name": "BaseBdev1", 00:11:07.112 "uuid": "e4c431e8-1c62-4563-af39-042f402d2e92", 00:11:07.112 "is_configured": true, 00:11:07.112 "data_offset": 2048, 00:11:07.112 "data_size": 63488 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "name": "BaseBdev2", 00:11:07.112 "uuid": "a71517fd-eb6e-4c5d-a4e3-30b66558b1af", 00:11:07.112 "is_configured": true, 00:11:07.112 "data_offset": 2048, 00:11:07.112 "data_size": 63488 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "name": "BaseBdev3", 00:11:07.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.112 "is_configured": false, 00:11:07.112 "data_offset": 0, 00:11:07.112 "data_size": 0 00:11:07.112 }, 00:11:07.112 { 00:11:07.112 "name": "BaseBdev4", 00:11:07.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.112 "is_configured": false, 00:11:07.112 "data_offset": 0, 00:11:07.112 "data_size": 0 00:11:07.112 } 00:11:07.112 ] 00:11:07.112 }' 00:11:07.112 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.112 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.681 [2024-12-07 02:44:18.469725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.681 BaseBdev3 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.681 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.681 [ 00:11:07.681 { 00:11:07.681 "name": "BaseBdev3", 00:11:07.681 "aliases": [ 00:11:07.681 "24d3808b-9c42-40a1-a5d5-de4d1afc653a" 00:11:07.681 ], 00:11:07.681 "product_name": "Malloc disk", 00:11:07.681 "block_size": 512, 00:11:07.681 "num_blocks": 65536, 00:11:07.681 "uuid": "24d3808b-9c42-40a1-a5d5-de4d1afc653a", 00:11:07.681 "assigned_rate_limits": { 00:11:07.681 "rw_ios_per_sec": 0, 00:11:07.681 "rw_mbytes_per_sec": 0, 00:11:07.681 "r_mbytes_per_sec": 0, 00:11:07.681 "w_mbytes_per_sec": 0 00:11:07.681 }, 00:11:07.681 "claimed": true, 00:11:07.681 "claim_type": "exclusive_write", 00:11:07.681 "zoned": false, 00:11:07.681 "supported_io_types": { 00:11:07.681 "read": true, 00:11:07.681 
"write": true, 00:11:07.681 "unmap": true, 00:11:07.681 "flush": true, 00:11:07.681 "reset": true, 00:11:07.682 "nvme_admin": false, 00:11:07.682 "nvme_io": false, 00:11:07.682 "nvme_io_md": false, 00:11:07.682 "write_zeroes": true, 00:11:07.682 "zcopy": true, 00:11:07.682 "get_zone_info": false, 00:11:07.682 "zone_management": false, 00:11:07.682 "zone_append": false, 00:11:07.682 "compare": false, 00:11:07.682 "compare_and_write": false, 00:11:07.682 "abort": true, 00:11:07.682 "seek_hole": false, 00:11:07.682 "seek_data": false, 00:11:07.682 "copy": true, 00:11:07.682 "nvme_iov_md": false 00:11:07.682 }, 00:11:07.682 "memory_domains": [ 00:11:07.682 { 00:11:07.682 "dma_device_id": "system", 00:11:07.682 "dma_device_type": 1 00:11:07.682 }, 00:11:07.682 { 00:11:07.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.682 "dma_device_type": 2 00:11:07.682 } 00:11:07.682 ], 00:11:07.682 "driver_specific": {} 00:11:07.682 } 00:11:07.682 ] 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.682 "name": "Existed_Raid", 00:11:07.682 "uuid": "4e71158c-9802-484e-937f-5dda4c9accd6", 00:11:07.682 "strip_size_kb": 0, 00:11:07.682 "state": "configuring", 00:11:07.682 "raid_level": "raid1", 00:11:07.682 "superblock": true, 00:11:07.682 "num_base_bdevs": 4, 00:11:07.682 "num_base_bdevs_discovered": 3, 00:11:07.682 "num_base_bdevs_operational": 4, 00:11:07.682 "base_bdevs_list": [ 00:11:07.682 { 00:11:07.682 "name": "BaseBdev1", 00:11:07.682 "uuid": "e4c431e8-1c62-4563-af39-042f402d2e92", 00:11:07.682 "is_configured": true, 00:11:07.682 "data_offset": 2048, 00:11:07.682 "data_size": 63488 00:11:07.682 }, 00:11:07.682 { 00:11:07.682 "name": "BaseBdev2", 00:11:07.682 "uuid": 
"a71517fd-eb6e-4c5d-a4e3-30b66558b1af", 00:11:07.682 "is_configured": true, 00:11:07.682 "data_offset": 2048, 00:11:07.682 "data_size": 63488 00:11:07.682 }, 00:11:07.682 { 00:11:07.682 "name": "BaseBdev3", 00:11:07.682 "uuid": "24d3808b-9c42-40a1-a5d5-de4d1afc653a", 00:11:07.682 "is_configured": true, 00:11:07.682 "data_offset": 2048, 00:11:07.682 "data_size": 63488 00:11:07.682 }, 00:11:07.682 { 00:11:07.682 "name": "BaseBdev4", 00:11:07.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.682 "is_configured": false, 00:11:07.682 "data_offset": 0, 00:11:07.682 "data_size": 0 00:11:07.682 } 00:11:07.682 ] 00:11:07.682 }' 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.682 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.943 02:44:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.943 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.943 02:44:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.943 [2024-12-07 02:44:19.001764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.943 [2024-12-07 02:44:19.001995] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:07.943 [2024-12-07 02:44:19.002012] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:07.943 [2024-12-07 02:44:19.002336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:07.943 BaseBdev4 00:11:07.943 [2024-12-07 02:44:19.002477] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:07.943 [2024-12-07 02:44:19.002492] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000006980 00:11:07.943 [2024-12-07 02:44:19.002647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.943 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.203 [ 00:11:08.203 { 00:11:08.203 "name": "BaseBdev4", 00:11:08.203 "aliases": [ 00:11:08.203 "8cdaf638-3449-467c-bec8-306210497896" 00:11:08.203 ], 00:11:08.203 "product_name": "Malloc disk", 00:11:08.203 "block_size": 512, 00:11:08.203 
"num_blocks": 65536, 00:11:08.203 "uuid": "8cdaf638-3449-467c-bec8-306210497896", 00:11:08.203 "assigned_rate_limits": { 00:11:08.203 "rw_ios_per_sec": 0, 00:11:08.203 "rw_mbytes_per_sec": 0, 00:11:08.203 "r_mbytes_per_sec": 0, 00:11:08.203 "w_mbytes_per_sec": 0 00:11:08.203 }, 00:11:08.203 "claimed": true, 00:11:08.203 "claim_type": "exclusive_write", 00:11:08.203 "zoned": false, 00:11:08.203 "supported_io_types": { 00:11:08.203 "read": true, 00:11:08.203 "write": true, 00:11:08.203 "unmap": true, 00:11:08.203 "flush": true, 00:11:08.203 "reset": true, 00:11:08.203 "nvme_admin": false, 00:11:08.203 "nvme_io": false, 00:11:08.203 "nvme_io_md": false, 00:11:08.203 "write_zeroes": true, 00:11:08.203 "zcopy": true, 00:11:08.203 "get_zone_info": false, 00:11:08.203 "zone_management": false, 00:11:08.203 "zone_append": false, 00:11:08.203 "compare": false, 00:11:08.203 "compare_and_write": false, 00:11:08.203 "abort": true, 00:11:08.203 "seek_hole": false, 00:11:08.203 "seek_data": false, 00:11:08.203 "copy": true, 00:11:08.203 "nvme_iov_md": false 00:11:08.203 }, 00:11:08.203 "memory_domains": [ 00:11:08.203 { 00:11:08.203 "dma_device_id": "system", 00:11:08.203 "dma_device_type": 1 00:11:08.203 }, 00:11:08.203 { 00:11:08.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.203 "dma_device_type": 2 00:11:08.203 } 00:11:08.203 ], 00:11:08.203 "driver_specific": {} 00:11:08.203 } 00:11:08.203 ] 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.203 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.203 "name": "Existed_Raid", 00:11:08.203 "uuid": "4e71158c-9802-484e-937f-5dda4c9accd6", 00:11:08.203 "strip_size_kb": 0, 00:11:08.203 "state": "online", 00:11:08.203 "raid_level": "raid1", 00:11:08.203 "superblock": true, 00:11:08.203 "num_base_bdevs": 4, 
00:11:08.203 "num_base_bdevs_discovered": 4, 00:11:08.203 "num_base_bdevs_operational": 4, 00:11:08.203 "base_bdevs_list": [ 00:11:08.203 { 00:11:08.203 "name": "BaseBdev1", 00:11:08.203 "uuid": "e4c431e8-1c62-4563-af39-042f402d2e92", 00:11:08.203 "is_configured": true, 00:11:08.203 "data_offset": 2048, 00:11:08.203 "data_size": 63488 00:11:08.203 }, 00:11:08.203 { 00:11:08.203 "name": "BaseBdev2", 00:11:08.203 "uuid": "a71517fd-eb6e-4c5d-a4e3-30b66558b1af", 00:11:08.203 "is_configured": true, 00:11:08.203 "data_offset": 2048, 00:11:08.203 "data_size": 63488 00:11:08.203 }, 00:11:08.203 { 00:11:08.203 "name": "BaseBdev3", 00:11:08.203 "uuid": "24d3808b-9c42-40a1-a5d5-de4d1afc653a", 00:11:08.203 "is_configured": true, 00:11:08.204 "data_offset": 2048, 00:11:08.204 "data_size": 63488 00:11:08.204 }, 00:11:08.204 { 00:11:08.204 "name": "BaseBdev4", 00:11:08.204 "uuid": "8cdaf638-3449-467c-bec8-306210497896", 00:11:08.204 "is_configured": true, 00:11:08.204 "data_offset": 2048, 00:11:08.204 "data_size": 63488 00:11:08.204 } 00:11:08.204 ] 00:11:08.204 }' 00:11:08.204 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.204 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.464 
02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.464 [2024-12-07 02:44:19.421445] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.464 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.464 "name": "Existed_Raid", 00:11:08.464 "aliases": [ 00:11:08.464 "4e71158c-9802-484e-937f-5dda4c9accd6" 00:11:08.464 ], 00:11:08.464 "product_name": "Raid Volume", 00:11:08.464 "block_size": 512, 00:11:08.464 "num_blocks": 63488, 00:11:08.464 "uuid": "4e71158c-9802-484e-937f-5dda4c9accd6", 00:11:08.464 "assigned_rate_limits": { 00:11:08.464 "rw_ios_per_sec": 0, 00:11:08.464 "rw_mbytes_per_sec": 0, 00:11:08.464 "r_mbytes_per_sec": 0, 00:11:08.464 "w_mbytes_per_sec": 0 00:11:08.464 }, 00:11:08.464 "claimed": false, 00:11:08.464 "zoned": false, 00:11:08.464 "supported_io_types": { 00:11:08.464 "read": true, 00:11:08.464 "write": true, 00:11:08.464 "unmap": false, 00:11:08.464 "flush": false, 00:11:08.464 "reset": true, 00:11:08.464 "nvme_admin": false, 00:11:08.464 "nvme_io": false, 00:11:08.464 "nvme_io_md": false, 00:11:08.464 "write_zeroes": true, 00:11:08.464 "zcopy": false, 00:11:08.464 "get_zone_info": false, 00:11:08.464 "zone_management": false, 00:11:08.464 "zone_append": false, 00:11:08.464 "compare": false, 00:11:08.464 "compare_and_write": false, 00:11:08.464 "abort": false, 00:11:08.464 "seek_hole": false, 00:11:08.464 "seek_data": false, 00:11:08.464 "copy": false, 00:11:08.464 
"nvme_iov_md": false 00:11:08.464 }, 00:11:08.464 "memory_domains": [ 00:11:08.464 { 00:11:08.464 "dma_device_id": "system", 00:11:08.464 "dma_device_type": 1 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.464 "dma_device_type": 2 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "dma_device_id": "system", 00:11:08.464 "dma_device_type": 1 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.464 "dma_device_type": 2 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "dma_device_id": "system", 00:11:08.464 "dma_device_type": 1 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.464 "dma_device_type": 2 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "dma_device_id": "system", 00:11:08.464 "dma_device_type": 1 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.464 "dma_device_type": 2 00:11:08.464 } 00:11:08.464 ], 00:11:08.464 "driver_specific": { 00:11:08.464 "raid": { 00:11:08.464 "uuid": "4e71158c-9802-484e-937f-5dda4c9accd6", 00:11:08.464 "strip_size_kb": 0, 00:11:08.464 "state": "online", 00:11:08.464 "raid_level": "raid1", 00:11:08.464 "superblock": true, 00:11:08.464 "num_base_bdevs": 4, 00:11:08.464 "num_base_bdevs_discovered": 4, 00:11:08.464 "num_base_bdevs_operational": 4, 00:11:08.464 "base_bdevs_list": [ 00:11:08.464 { 00:11:08.464 "name": "BaseBdev1", 00:11:08.464 "uuid": "e4c431e8-1c62-4563-af39-042f402d2e92", 00:11:08.464 "is_configured": true, 00:11:08.464 "data_offset": 2048, 00:11:08.464 "data_size": 63488 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "name": "BaseBdev2", 00:11:08.464 "uuid": "a71517fd-eb6e-4c5d-a4e3-30b66558b1af", 00:11:08.464 "is_configured": true, 00:11:08.464 "data_offset": 2048, 00:11:08.464 "data_size": 63488 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "name": "BaseBdev3", 00:11:08.464 "uuid": "24d3808b-9c42-40a1-a5d5-de4d1afc653a", 00:11:08.464 "is_configured": true, 
00:11:08.464 "data_offset": 2048, 00:11:08.464 "data_size": 63488 00:11:08.464 }, 00:11:08.464 { 00:11:08.464 "name": "BaseBdev4", 00:11:08.464 "uuid": "8cdaf638-3449-467c-bec8-306210497896", 00:11:08.464 "is_configured": true, 00:11:08.465 "data_offset": 2048, 00:11:08.465 "data_size": 63488 00:11:08.465 } 00:11:08.465 ] 00:11:08.465 } 00:11:08.465 } 00:11:08.465 }' 00:11:08.465 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.465 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:08.465 BaseBdev2 00:11:08.465 BaseBdev3 00:11:08.465 BaseBdev4' 00:11:08.465 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.465 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.465 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.465 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:08.465 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.725 02:44:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.725 [2024-12-07 02:44:19.720609] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:08.725 02:44:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:08.725 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.726 "name": "Existed_Raid", 00:11:08.726 "uuid": "4e71158c-9802-484e-937f-5dda4c9accd6", 00:11:08.726 "strip_size_kb": 0, 00:11:08.726 
"state": "online", 00:11:08.726 "raid_level": "raid1", 00:11:08.726 "superblock": true, 00:11:08.726 "num_base_bdevs": 4, 00:11:08.726 "num_base_bdevs_discovered": 3, 00:11:08.726 "num_base_bdevs_operational": 3, 00:11:08.726 "base_bdevs_list": [ 00:11:08.726 { 00:11:08.726 "name": null, 00:11:08.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.726 "is_configured": false, 00:11:08.726 "data_offset": 0, 00:11:08.726 "data_size": 63488 00:11:08.726 }, 00:11:08.726 { 00:11:08.726 "name": "BaseBdev2", 00:11:08.726 "uuid": "a71517fd-eb6e-4c5d-a4e3-30b66558b1af", 00:11:08.726 "is_configured": true, 00:11:08.726 "data_offset": 2048, 00:11:08.726 "data_size": 63488 00:11:08.726 }, 00:11:08.726 { 00:11:08.726 "name": "BaseBdev3", 00:11:08.726 "uuid": "24d3808b-9c42-40a1-a5d5-de4d1afc653a", 00:11:08.726 "is_configured": true, 00:11:08.726 "data_offset": 2048, 00:11:08.726 "data_size": 63488 00:11:08.726 }, 00:11:08.726 { 00:11:08.726 "name": "BaseBdev4", 00:11:08.726 "uuid": "8cdaf638-3449-467c-bec8-306210497896", 00:11:08.726 "is_configured": true, 00:11:08.726 "data_offset": 2048, 00:11:08.726 "data_size": 63488 00:11:08.726 } 00:11:08.726 ] 00:11:08.726 }' 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.726 02:44:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.308 02:44:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 [2024-12-07 02:44:20.176564] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 [2024-12-07 02:44:20.249061] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 [2024-12-07 02:44:20.325831] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:09.308 [2024-12-07 02:44:20.326031] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.308 [2024-12-07 02:44:20.347119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.308 [2024-12-07 02:44:20.347246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.308 [2024-12-07 02:44:20.347291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.308 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.569 BaseBdev2 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:09.569 [ 00:11:09.569 { 00:11:09.569 "name": "BaseBdev2", 00:11:09.569 "aliases": [ 00:11:09.569 "7230f759-b2a0-4443-a3f0-26608a77ac70" 00:11:09.569 ], 00:11:09.569 "product_name": "Malloc disk", 00:11:09.569 "block_size": 512, 00:11:09.569 "num_blocks": 65536, 00:11:09.569 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:09.569 "assigned_rate_limits": { 00:11:09.569 "rw_ios_per_sec": 0, 00:11:09.569 "rw_mbytes_per_sec": 0, 00:11:09.569 "r_mbytes_per_sec": 0, 00:11:09.569 "w_mbytes_per_sec": 0 00:11:09.569 }, 00:11:09.569 "claimed": false, 00:11:09.569 "zoned": false, 00:11:09.569 "supported_io_types": { 00:11:09.569 "read": true, 00:11:09.569 "write": true, 00:11:09.569 "unmap": true, 00:11:09.569 "flush": true, 00:11:09.569 "reset": true, 00:11:09.569 "nvme_admin": false, 00:11:09.569 "nvme_io": false, 00:11:09.569 "nvme_io_md": false, 00:11:09.569 "write_zeroes": true, 00:11:09.569 "zcopy": true, 00:11:09.569 "get_zone_info": false, 00:11:09.569 "zone_management": false, 00:11:09.569 "zone_append": false, 00:11:09.569 "compare": false, 00:11:09.569 "compare_and_write": false, 00:11:09.569 "abort": true, 00:11:09.569 "seek_hole": false, 00:11:09.569 "seek_data": false, 00:11:09.569 "copy": true, 00:11:09.569 "nvme_iov_md": false 00:11:09.569 }, 00:11:09.569 "memory_domains": [ 00:11:09.569 { 00:11:09.569 "dma_device_id": "system", 00:11:09.569 "dma_device_type": 1 00:11:09.569 }, 00:11:09.569 { 00:11:09.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.569 "dma_device_type": 2 00:11:09.569 } 00:11:09.569 ], 00:11:09.569 "driver_specific": {} 00:11:09.569 } 00:11:09.569 ] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.569 02:44:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.569 BaseBdev3 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.569 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.569 02:44:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.569 [ 00:11:09.569 { 00:11:09.569 "name": "BaseBdev3", 00:11:09.569 "aliases": [ 00:11:09.569 "ccfef2d8-54f8-42c2-99a6-dcad5323a198" 00:11:09.569 ], 00:11:09.570 "product_name": "Malloc disk", 00:11:09.570 "block_size": 512, 00:11:09.570 "num_blocks": 65536, 00:11:09.570 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:09.570 "assigned_rate_limits": { 00:11:09.570 "rw_ios_per_sec": 0, 00:11:09.570 "rw_mbytes_per_sec": 0, 00:11:09.570 "r_mbytes_per_sec": 0, 00:11:09.570 "w_mbytes_per_sec": 0 00:11:09.570 }, 00:11:09.570 "claimed": false, 00:11:09.570 "zoned": false, 00:11:09.570 "supported_io_types": { 00:11:09.570 "read": true, 00:11:09.570 "write": true, 00:11:09.570 "unmap": true, 00:11:09.570 "flush": true, 00:11:09.570 "reset": true, 00:11:09.570 "nvme_admin": false, 00:11:09.570 "nvme_io": false, 00:11:09.570 "nvme_io_md": false, 00:11:09.570 "write_zeroes": true, 00:11:09.570 "zcopy": true, 00:11:09.570 "get_zone_info": false, 00:11:09.570 "zone_management": false, 00:11:09.570 "zone_append": false, 00:11:09.570 "compare": false, 00:11:09.570 "compare_and_write": false, 00:11:09.570 "abort": true, 00:11:09.570 "seek_hole": false, 00:11:09.570 "seek_data": false, 00:11:09.570 "copy": true, 00:11:09.570 "nvme_iov_md": false 00:11:09.570 }, 00:11:09.570 "memory_domains": [ 00:11:09.570 { 00:11:09.570 "dma_device_id": "system", 00:11:09.570 "dma_device_type": 1 00:11:09.570 }, 00:11:09.570 { 00:11:09.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.570 "dma_device_type": 2 00:11:09.570 } 00:11:09.570 ], 00:11:09.570 "driver_specific": {} 00:11:09.570 } 00:11:09.570 ] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 BaseBdev4 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 [ 00:11:09.570 { 00:11:09.570 "name": "BaseBdev4", 00:11:09.570 "aliases": [ 00:11:09.570 "e5e0d242-d7f1-40ea-9f32-bd3232f03238" 00:11:09.570 ], 00:11:09.570 "product_name": "Malloc disk", 00:11:09.570 "block_size": 512, 00:11:09.570 "num_blocks": 65536, 00:11:09.570 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:09.570 "assigned_rate_limits": { 00:11:09.570 "rw_ios_per_sec": 0, 00:11:09.570 "rw_mbytes_per_sec": 0, 00:11:09.570 "r_mbytes_per_sec": 0, 00:11:09.570 "w_mbytes_per_sec": 0 00:11:09.570 }, 00:11:09.570 "claimed": false, 00:11:09.570 "zoned": false, 00:11:09.570 "supported_io_types": { 00:11:09.570 "read": true, 00:11:09.570 "write": true, 00:11:09.570 "unmap": true, 00:11:09.570 "flush": true, 00:11:09.570 "reset": true, 00:11:09.570 "nvme_admin": false, 00:11:09.570 "nvme_io": false, 00:11:09.570 "nvme_io_md": false, 00:11:09.570 "write_zeroes": true, 00:11:09.570 "zcopy": true, 00:11:09.570 "get_zone_info": false, 00:11:09.570 "zone_management": false, 00:11:09.570 "zone_append": false, 00:11:09.570 "compare": false, 00:11:09.570 "compare_and_write": false, 00:11:09.570 "abort": true, 00:11:09.570 "seek_hole": false, 00:11:09.570 "seek_data": false, 00:11:09.570 "copy": true, 00:11:09.570 "nvme_iov_md": false 00:11:09.570 }, 00:11:09.570 "memory_domains": [ 00:11:09.570 { 00:11:09.570 "dma_device_id": "system", 00:11:09.570 "dma_device_type": 1 00:11:09.570 }, 00:11:09.570 { 00:11:09.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.570 "dma_device_type": 2 00:11:09.570 } 00:11:09.570 ], 00:11:09.570 "driver_specific": {} 00:11:09.570 } 00:11:09.570 ] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 [2024-12-07 02:44:20.581967] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.570 [2024-12-07 02:44:20.582100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.570 [2024-12-07 02:44:20.582155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.570 [2024-12-07 02:44:20.584276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.570 [2024-12-07 02:44:20.584361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.570 "name": "Existed_Raid", 00:11:09.570 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:09.570 "strip_size_kb": 0, 00:11:09.570 "state": "configuring", 00:11:09.570 "raid_level": "raid1", 00:11:09.570 "superblock": true, 00:11:09.570 "num_base_bdevs": 4, 00:11:09.570 "num_base_bdevs_discovered": 3, 00:11:09.570 "num_base_bdevs_operational": 4, 00:11:09.570 "base_bdevs_list": [ 00:11:09.570 { 00:11:09.570 "name": "BaseBdev1", 00:11:09.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.570 "is_configured": false, 00:11:09.570 "data_offset": 0, 00:11:09.570 "data_size": 0 00:11:09.570 }, 00:11:09.570 { 00:11:09.570 "name": "BaseBdev2", 00:11:09.570 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 
00:11:09.570 "is_configured": true, 00:11:09.570 "data_offset": 2048, 00:11:09.570 "data_size": 63488 00:11:09.570 }, 00:11:09.570 { 00:11:09.570 "name": "BaseBdev3", 00:11:09.570 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:09.570 "is_configured": true, 00:11:09.570 "data_offset": 2048, 00:11:09.570 "data_size": 63488 00:11:09.570 }, 00:11:09.570 { 00:11:09.570 "name": "BaseBdev4", 00:11:09.570 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:09.570 "is_configured": true, 00:11:09.570 "data_offset": 2048, 00:11:09.570 "data_size": 63488 00:11:09.570 } 00:11:09.570 ] 00:11:09.570 }' 00:11:09.570 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.571 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.140 02:44:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:10.140 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.140 02:44:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.140 [2024-12-07 02:44:21.001193] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.140 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.140 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.140 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.141 "name": "Existed_Raid", 00:11:10.141 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:10.141 "strip_size_kb": 0, 00:11:10.141 "state": "configuring", 00:11:10.141 "raid_level": "raid1", 00:11:10.141 "superblock": true, 00:11:10.141 "num_base_bdevs": 4, 00:11:10.141 "num_base_bdevs_discovered": 2, 00:11:10.141 "num_base_bdevs_operational": 4, 00:11:10.141 "base_bdevs_list": [ 00:11:10.141 { 00:11:10.141 "name": "BaseBdev1", 00:11:10.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.141 "is_configured": false, 00:11:10.141 "data_offset": 0, 00:11:10.141 "data_size": 0 00:11:10.141 }, 00:11:10.141 { 00:11:10.141 "name": null, 00:11:10.141 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:10.141 
"is_configured": false, 00:11:10.141 "data_offset": 0, 00:11:10.141 "data_size": 63488 00:11:10.141 }, 00:11:10.141 { 00:11:10.141 "name": "BaseBdev3", 00:11:10.141 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:10.141 "is_configured": true, 00:11:10.141 "data_offset": 2048, 00:11:10.141 "data_size": 63488 00:11:10.141 }, 00:11:10.141 { 00:11:10.141 "name": "BaseBdev4", 00:11:10.141 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:10.141 "is_configured": true, 00:11:10.141 "data_offset": 2048, 00:11:10.141 "data_size": 63488 00:11:10.141 } 00:11:10.141 ] 00:11:10.141 }' 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.141 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.400 [2024-12-07 02:44:21.469155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.400 BaseBdev1 
00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.400 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.660 [ 00:11:10.660 { 00:11:10.660 "name": "BaseBdev1", 00:11:10.660 "aliases": [ 00:11:10.660 "24fa35ae-c125-4d41-af7f-0b97e9064bd7" 00:11:10.660 ], 00:11:10.660 "product_name": "Malloc disk", 00:11:10.660 "block_size": 512, 00:11:10.660 "num_blocks": 65536, 00:11:10.660 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:10.660 "assigned_rate_limits": { 00:11:10.660 
"rw_ios_per_sec": 0, 00:11:10.660 "rw_mbytes_per_sec": 0, 00:11:10.660 "r_mbytes_per_sec": 0, 00:11:10.660 "w_mbytes_per_sec": 0 00:11:10.660 }, 00:11:10.660 "claimed": true, 00:11:10.660 "claim_type": "exclusive_write", 00:11:10.660 "zoned": false, 00:11:10.660 "supported_io_types": { 00:11:10.660 "read": true, 00:11:10.660 "write": true, 00:11:10.660 "unmap": true, 00:11:10.660 "flush": true, 00:11:10.660 "reset": true, 00:11:10.660 "nvme_admin": false, 00:11:10.660 "nvme_io": false, 00:11:10.660 "nvme_io_md": false, 00:11:10.660 "write_zeroes": true, 00:11:10.660 "zcopy": true, 00:11:10.660 "get_zone_info": false, 00:11:10.660 "zone_management": false, 00:11:10.660 "zone_append": false, 00:11:10.660 "compare": false, 00:11:10.660 "compare_and_write": false, 00:11:10.660 "abort": true, 00:11:10.660 "seek_hole": false, 00:11:10.660 "seek_data": false, 00:11:10.660 "copy": true, 00:11:10.660 "nvme_iov_md": false 00:11:10.660 }, 00:11:10.660 "memory_domains": [ 00:11:10.660 { 00:11:10.660 "dma_device_id": "system", 00:11:10.660 "dma_device_type": 1 00:11:10.660 }, 00:11:10.660 { 00:11:10.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.660 "dma_device_type": 2 00:11:10.660 } 00:11:10.660 ], 00:11:10.660 "driver_specific": {} 00:11:10.660 } 00:11:10.660 ] 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.660 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.660 "name": "Existed_Raid", 00:11:10.660 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:10.660 "strip_size_kb": 0, 00:11:10.661 "state": "configuring", 00:11:10.661 "raid_level": "raid1", 00:11:10.661 "superblock": true, 00:11:10.661 "num_base_bdevs": 4, 00:11:10.661 "num_base_bdevs_discovered": 3, 00:11:10.661 "num_base_bdevs_operational": 4, 00:11:10.661 "base_bdevs_list": [ 00:11:10.661 { 00:11:10.661 "name": "BaseBdev1", 00:11:10.661 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:10.661 "is_configured": true, 00:11:10.661 "data_offset": 2048, 00:11:10.661 "data_size": 63488 
00:11:10.661 }, 00:11:10.661 { 00:11:10.661 "name": null, 00:11:10.661 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:10.661 "is_configured": false, 00:11:10.661 "data_offset": 0, 00:11:10.661 "data_size": 63488 00:11:10.661 }, 00:11:10.661 { 00:11:10.661 "name": "BaseBdev3", 00:11:10.661 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:10.661 "is_configured": true, 00:11:10.661 "data_offset": 2048, 00:11:10.661 "data_size": 63488 00:11:10.661 }, 00:11:10.661 { 00:11:10.661 "name": "BaseBdev4", 00:11:10.661 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:10.661 "is_configured": true, 00:11:10.661 "data_offset": 2048, 00:11:10.661 "data_size": 63488 00:11:10.661 } 00:11:10.661 ] 00:11:10.661 }' 00:11:10.661 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.661 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.920 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.920 
[2024-12-07 02:44:21.992319] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.178 02:44:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.178 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.178 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.178 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.178 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.178 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.179 02:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.179 "name": "Existed_Raid", 00:11:11.179 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:11.179 "strip_size_kb": 0, 00:11:11.179 "state": "configuring", 00:11:11.179 "raid_level": "raid1", 00:11:11.179 "superblock": true, 00:11:11.179 "num_base_bdevs": 4, 00:11:11.179 "num_base_bdevs_discovered": 2, 00:11:11.179 "num_base_bdevs_operational": 4, 00:11:11.179 "base_bdevs_list": [ 00:11:11.179 { 00:11:11.179 "name": "BaseBdev1", 00:11:11.179 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:11.179 "is_configured": true, 00:11:11.179 "data_offset": 2048, 00:11:11.179 "data_size": 63488 00:11:11.179 }, 00:11:11.179 { 00:11:11.179 "name": null, 00:11:11.179 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:11.179 "is_configured": false, 00:11:11.179 "data_offset": 0, 00:11:11.179 "data_size": 63488 00:11:11.179 }, 00:11:11.179 { 00:11:11.179 "name": null, 00:11:11.179 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:11.179 "is_configured": false, 00:11:11.179 "data_offset": 0, 00:11:11.179 "data_size": 63488 00:11:11.179 }, 00:11:11.179 { 00:11:11.179 "name": "BaseBdev4", 00:11:11.179 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:11.179 "is_configured": true, 00:11:11.179 "data_offset": 2048, 00:11:11.179 "data_size": 63488 00:11:11.179 } 00:11:11.179 ] 00:11:11.179 }' 00:11:11.179 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.179 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.437 02:44:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.437 [2024-12-07 02:44:22.479550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:11.437 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.438 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.438 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.438 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.438 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.438 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.697 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.697 "name": "Existed_Raid", 00:11:11.697 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:11.697 "strip_size_kb": 0, 00:11:11.697 "state": "configuring", 00:11:11.697 "raid_level": "raid1", 00:11:11.697 "superblock": true, 00:11:11.697 "num_base_bdevs": 4, 00:11:11.697 "num_base_bdevs_discovered": 3, 00:11:11.697 "num_base_bdevs_operational": 4, 00:11:11.697 "base_bdevs_list": [ 00:11:11.697 { 00:11:11.697 "name": "BaseBdev1", 00:11:11.697 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:11.697 "is_configured": true, 00:11:11.697 "data_offset": 2048, 00:11:11.697 "data_size": 63488 00:11:11.697 }, 00:11:11.697 { 00:11:11.697 "name": null, 00:11:11.697 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:11.697 "is_configured": false, 00:11:11.697 "data_offset": 0, 00:11:11.697 "data_size": 63488 00:11:11.697 }, 00:11:11.697 { 00:11:11.697 "name": "BaseBdev3", 00:11:11.697 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:11.697 "is_configured": true, 00:11:11.697 "data_offset": 2048, 00:11:11.697 "data_size": 63488 00:11:11.697 }, 00:11:11.697 { 00:11:11.697 "name": "BaseBdev4", 00:11:11.697 "uuid": 
"e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:11.698 "is_configured": true, 00:11:11.698 "data_offset": 2048, 00:11:11.698 "data_size": 63488 00:11:11.698 } 00:11:11.698 ] 00:11:11.698 }' 00:11:11.698 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.698 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.957 [2024-12-07 02:44:22.962798] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.957 02:44:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.957 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.215 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.215 "name": "Existed_Raid", 00:11:12.215 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:12.215 "strip_size_kb": 0, 00:11:12.216 "state": "configuring", 00:11:12.216 "raid_level": "raid1", 00:11:12.216 "superblock": true, 00:11:12.216 "num_base_bdevs": 4, 00:11:12.216 "num_base_bdevs_discovered": 2, 00:11:12.216 "num_base_bdevs_operational": 4, 00:11:12.216 "base_bdevs_list": [ 00:11:12.216 { 00:11:12.216 "name": null, 00:11:12.216 
"uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:12.216 "is_configured": false, 00:11:12.216 "data_offset": 0, 00:11:12.216 "data_size": 63488 00:11:12.216 }, 00:11:12.216 { 00:11:12.216 "name": null, 00:11:12.216 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:12.216 "is_configured": false, 00:11:12.216 "data_offset": 0, 00:11:12.216 "data_size": 63488 00:11:12.216 }, 00:11:12.216 { 00:11:12.216 "name": "BaseBdev3", 00:11:12.216 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:12.216 "is_configured": true, 00:11:12.216 "data_offset": 2048, 00:11:12.216 "data_size": 63488 00:11:12.216 }, 00:11:12.216 { 00:11:12.216 "name": "BaseBdev4", 00:11:12.216 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:12.216 "is_configured": true, 00:11:12.216 "data_offset": 2048, 00:11:12.216 "data_size": 63488 00:11:12.216 } 00:11:12.216 ] 00:11:12.216 }' 00:11:12.216 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.216 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.474 [2024-12-07 02:44:23.513905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.474 02:44:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.474 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.734 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.734 "name": "Existed_Raid", 00:11:12.734 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:12.734 "strip_size_kb": 0, 00:11:12.734 "state": "configuring", 00:11:12.734 "raid_level": "raid1", 00:11:12.734 "superblock": true, 00:11:12.734 "num_base_bdevs": 4, 00:11:12.734 "num_base_bdevs_discovered": 3, 00:11:12.734 "num_base_bdevs_operational": 4, 00:11:12.734 "base_bdevs_list": [ 00:11:12.734 { 00:11:12.734 "name": null, 00:11:12.734 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:12.734 "is_configured": false, 00:11:12.734 "data_offset": 0, 00:11:12.734 "data_size": 63488 00:11:12.734 }, 00:11:12.734 { 00:11:12.734 "name": "BaseBdev2", 00:11:12.734 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:12.734 "is_configured": true, 00:11:12.734 "data_offset": 2048, 00:11:12.734 "data_size": 63488 00:11:12.734 }, 00:11:12.734 { 00:11:12.734 "name": "BaseBdev3", 00:11:12.734 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:12.734 "is_configured": true, 00:11:12.734 "data_offset": 2048, 00:11:12.734 "data_size": 63488 00:11:12.734 }, 00:11:12.734 { 00:11:12.734 "name": "BaseBdev4", 00:11:12.734 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:12.734 "is_configured": true, 00:11:12.734 "data_offset": 2048, 00:11:12.734 "data_size": 63488 00:11:12.734 } 00:11:12.734 ] 00:11:12.734 }' 00:11:12.734 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.734 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.993 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.994 02:44:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:12.994 02:44:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 24fa35ae-c125-4d41-af7f-0b97e9064bd7 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.994 [2024-12-07 02:44:24.045800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:12.994 [2024-12-07 02:44:24.046015] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:12.994 [2024-12-07 02:44:24.046035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.994 [2024-12-07 02:44:24.046339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:11:12.994 NewBaseBdev 00:11:12.994 [2024-12-07 02:44:24.046488] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:12.994 [2024-12-07 02:44:24.046499] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:11:12.994 [2024-12-07 02:44:24.046636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:12.994 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.994 02:44:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.254 [ 00:11:13.254 { 00:11:13.254 "name": "NewBaseBdev", 00:11:13.254 "aliases": [ 00:11:13.254 "24fa35ae-c125-4d41-af7f-0b97e9064bd7" 00:11:13.254 ], 00:11:13.254 "product_name": "Malloc disk", 00:11:13.254 "block_size": 512, 00:11:13.254 "num_blocks": 65536, 00:11:13.254 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:13.254 "assigned_rate_limits": { 00:11:13.254 "rw_ios_per_sec": 0, 00:11:13.254 "rw_mbytes_per_sec": 0, 00:11:13.254 "r_mbytes_per_sec": 0, 00:11:13.254 "w_mbytes_per_sec": 0 00:11:13.254 }, 00:11:13.254 "claimed": true, 00:11:13.254 "claim_type": "exclusive_write", 00:11:13.254 "zoned": false, 00:11:13.254 "supported_io_types": { 00:11:13.254 "read": true, 00:11:13.254 "write": true, 00:11:13.254 "unmap": true, 00:11:13.254 "flush": true, 00:11:13.254 "reset": true, 00:11:13.254 "nvme_admin": false, 00:11:13.254 "nvme_io": false, 00:11:13.254 "nvme_io_md": false, 00:11:13.254 "write_zeroes": true, 00:11:13.254 "zcopy": true, 00:11:13.254 "get_zone_info": false, 00:11:13.254 "zone_management": false, 00:11:13.254 "zone_append": false, 00:11:13.254 "compare": false, 00:11:13.254 "compare_and_write": false, 00:11:13.254 "abort": true, 00:11:13.254 "seek_hole": false, 00:11:13.254 "seek_data": false, 00:11:13.254 "copy": true, 00:11:13.254 "nvme_iov_md": false 00:11:13.254 }, 00:11:13.254 "memory_domains": [ 00:11:13.254 { 00:11:13.254 "dma_device_id": "system", 00:11:13.254 "dma_device_type": 1 00:11:13.254 }, 00:11:13.254 { 00:11:13.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.254 "dma_device_type": 2 00:11:13.254 } 00:11:13.254 ], 00:11:13.254 "driver_specific": {} 00:11:13.254 } 00:11:13.254 ] 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.254 02:44:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.254 "name": "Existed_Raid", 00:11:13.254 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:13.254 "strip_size_kb": 0, 00:11:13.254 
"state": "online", 00:11:13.254 "raid_level": "raid1", 00:11:13.254 "superblock": true, 00:11:13.254 "num_base_bdevs": 4, 00:11:13.254 "num_base_bdevs_discovered": 4, 00:11:13.254 "num_base_bdevs_operational": 4, 00:11:13.254 "base_bdevs_list": [ 00:11:13.254 { 00:11:13.254 "name": "NewBaseBdev", 00:11:13.254 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:13.254 "is_configured": true, 00:11:13.254 "data_offset": 2048, 00:11:13.254 "data_size": 63488 00:11:13.254 }, 00:11:13.254 { 00:11:13.254 "name": "BaseBdev2", 00:11:13.254 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:13.254 "is_configured": true, 00:11:13.254 "data_offset": 2048, 00:11:13.254 "data_size": 63488 00:11:13.254 }, 00:11:13.254 { 00:11:13.254 "name": "BaseBdev3", 00:11:13.254 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:13.254 "is_configured": true, 00:11:13.254 "data_offset": 2048, 00:11:13.254 "data_size": 63488 00:11:13.254 }, 00:11:13.254 { 00:11:13.254 "name": "BaseBdev4", 00:11:13.254 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:13.254 "is_configured": true, 00:11:13.254 "data_offset": 2048, 00:11:13.254 "data_size": 63488 00:11:13.254 } 00:11:13.254 ] 00:11:13.254 }' 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.254 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.514 
02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 [2024-12-07 02:44:24.445465] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.514 "name": "Existed_Raid", 00:11:13.514 "aliases": [ 00:11:13.514 "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b" 00:11:13.514 ], 00:11:13.514 "product_name": "Raid Volume", 00:11:13.514 "block_size": 512, 00:11:13.514 "num_blocks": 63488, 00:11:13.514 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:13.514 "assigned_rate_limits": { 00:11:13.514 "rw_ios_per_sec": 0, 00:11:13.514 "rw_mbytes_per_sec": 0, 00:11:13.514 "r_mbytes_per_sec": 0, 00:11:13.514 "w_mbytes_per_sec": 0 00:11:13.514 }, 00:11:13.514 "claimed": false, 00:11:13.514 "zoned": false, 00:11:13.514 "supported_io_types": { 00:11:13.514 "read": true, 00:11:13.514 "write": true, 00:11:13.514 "unmap": false, 00:11:13.514 "flush": false, 00:11:13.514 "reset": true, 00:11:13.514 "nvme_admin": false, 00:11:13.514 "nvme_io": false, 00:11:13.514 "nvme_io_md": false, 00:11:13.514 "write_zeroes": true, 00:11:13.514 "zcopy": false, 00:11:13.514 "get_zone_info": false, 00:11:13.514 "zone_management": false, 00:11:13.514 "zone_append": false, 00:11:13.514 "compare": false, 00:11:13.514 "compare_and_write": false, 00:11:13.514 
"abort": false, 00:11:13.514 "seek_hole": false, 00:11:13.514 "seek_data": false, 00:11:13.514 "copy": false, 00:11:13.514 "nvme_iov_md": false 00:11:13.514 }, 00:11:13.514 "memory_domains": [ 00:11:13.514 { 00:11:13.514 "dma_device_id": "system", 00:11:13.514 "dma_device_type": 1 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.514 "dma_device_type": 2 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "dma_device_id": "system", 00:11:13.514 "dma_device_type": 1 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.514 "dma_device_type": 2 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "dma_device_id": "system", 00:11:13.514 "dma_device_type": 1 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.514 "dma_device_type": 2 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "dma_device_id": "system", 00:11:13.514 "dma_device_type": 1 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.514 "dma_device_type": 2 00:11:13.514 } 00:11:13.514 ], 00:11:13.514 "driver_specific": { 00:11:13.514 "raid": { 00:11:13.514 "uuid": "e2db91cb-d30c-4b6d-ab99-7e4fcfe5347b", 00:11:13.514 "strip_size_kb": 0, 00:11:13.514 "state": "online", 00:11:13.514 "raid_level": "raid1", 00:11:13.514 "superblock": true, 00:11:13.514 "num_base_bdevs": 4, 00:11:13.514 "num_base_bdevs_discovered": 4, 00:11:13.514 "num_base_bdevs_operational": 4, 00:11:13.514 "base_bdevs_list": [ 00:11:13.514 { 00:11:13.514 "name": "NewBaseBdev", 00:11:13.514 "uuid": "24fa35ae-c125-4d41-af7f-0b97e9064bd7", 00:11:13.514 "is_configured": true, 00:11:13.514 "data_offset": 2048, 00:11:13.514 "data_size": 63488 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "name": "BaseBdev2", 00:11:13.514 "uuid": "7230f759-b2a0-4443-a3f0-26608a77ac70", 00:11:13.514 "is_configured": true, 00:11:13.514 "data_offset": 2048, 00:11:13.514 "data_size": 63488 00:11:13.514 }, 00:11:13.514 { 
00:11:13.514 "name": "BaseBdev3", 00:11:13.514 "uuid": "ccfef2d8-54f8-42c2-99a6-dcad5323a198", 00:11:13.514 "is_configured": true, 00:11:13.514 "data_offset": 2048, 00:11:13.514 "data_size": 63488 00:11:13.514 }, 00:11:13.514 { 00:11:13.514 "name": "BaseBdev4", 00:11:13.514 "uuid": "e5e0d242-d7f1-40ea-9f32-bd3232f03238", 00:11:13.514 "is_configured": true, 00:11:13.514 "data_offset": 2048, 00:11:13.514 "data_size": 63488 00:11:13.514 } 00:11:13.514 ] 00:11:13.514 } 00:11:13.514 } 00:11:13.514 }' 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:13.514 BaseBdev2 00:11:13.514 BaseBdev3 00:11:13.514 BaseBdev4' 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.514 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.775 [2024-12-07 02:44:24.724687] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.775 [2024-12-07 02:44:24.724771] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.775 [2024-12-07 02:44:24.724911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.775 [2024-12-07 02:44:24.725223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.775 [2024-12-07 02:44:24.725283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006d00 name Existed_Raid, state offline 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84874 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84874 ']' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84874 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84874 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84874' 00:11:13.775 killing process with pid 84874 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84874 00:11:13.775 [2024-12-07 02:44:24.760413] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.775 02:44:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84874 00:11:13.775 [2024-12-07 02:44:24.839566] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:14.384 02:44:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:14.384 00:11:14.384 real 0m9.603s 00:11:14.384 user 0m16.042s 00:11:14.384 sys 0m2.101s 00:11:14.384 02:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:14.384 02:44:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.384 ************************************ 00:11:14.384 END TEST raid_state_function_test_sb 00:11:14.384 ************************************ 00:11:14.384 02:44:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:14.384 02:44:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:14.384 02:44:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:14.384 02:44:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:14.384 ************************************ 00:11:14.384 START TEST raid_superblock_test 00:11:14.384 ************************************ 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:14.384 02:44:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85528 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85528 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 85528 ']' 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.384 02:44:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.384 [2024-12-07 02:44:25.375289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:14.384 [2024-12-07 02:44:25.375505] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85528 ] 00:11:14.662 [2024-12-07 02:44:25.537575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.662 [2024-12-07 02:44:25.607447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.662 [2024-12-07 02:44:25.684000] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.662 [2024-12-07 02:44:25.684153] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:15.243 
02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.243 malloc1 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.243 [2024-12-07 02:44:26.218655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.243 [2024-12-07 02:44:26.218827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.243 [2024-12-07 02:44:26.218867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:15.243 [2024-12-07 02:44:26.218906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.243 [2024-12-07 02:44:26.221362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.243 [2024-12-07 02:44:26.221437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.243 pt1 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.243 malloc2 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.243 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.243 [2024-12-07 02:44:26.268893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:15.243 [2024-12-07 02:44:26.269042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.243 [2024-12-07 02:44:26.269086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:15.243 [2024-12-07 02:44:26.269134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.243 [2024-12-07 02:44:26.271791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.243 [2024-12-07 02:44:26.271869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:15.243 
pt2 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.244 malloc3 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.244 [2024-12-07 02:44:26.303736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:15.244 [2024-12-07 02:44:26.303866] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.244 [2024-12-07 02:44:26.303910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:15.244 [2024-12-07 02:44:26.303943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.244 [2024-12-07 02:44:26.306291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.244 [2024-12-07 02:44:26.306363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:15.244 pt3 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.244 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 malloc4 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 [2024-12-07 02:44:26.342692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:15.503 [2024-12-07 02:44:26.342755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.503 [2024-12-07 02:44:26.342771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:15.503 [2024-12-07 02:44:26.342787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.503 [2024-12-07 02:44:26.345138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.503 [2024-12-07 02:44:26.345237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:15.503 pt4 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 [2024-12-07 02:44:26.354755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:15.503 [2024-12-07 02:44:26.356847] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:15.503 [2024-12-07 02:44:26.356904] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:15.503 [2024-12-07 02:44:26.356942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:15.503 [2024-12-07 02:44:26.357091] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:15.503 [2024-12-07 02:44:26.357110] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.503 [2024-12-07 02:44:26.357369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:15.503 [2024-12-07 02:44:26.357520] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:15.503 [2024-12-07 02:44:26.357530] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:15.503 [2024-12-07 02:44:26.357698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.503 
02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.503 "name": "raid_bdev1", 00:11:15.503 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:15.503 "strip_size_kb": 0, 00:11:15.503 "state": "online", 00:11:15.503 "raid_level": "raid1", 00:11:15.503 "superblock": true, 00:11:15.503 "num_base_bdevs": 4, 00:11:15.503 "num_base_bdevs_discovered": 4, 00:11:15.503 "num_base_bdevs_operational": 4, 00:11:15.503 "base_bdevs_list": [ 00:11:15.503 { 00:11:15.503 "name": "pt1", 00:11:15.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.503 "is_configured": true, 00:11:15.503 "data_offset": 2048, 00:11:15.503 "data_size": 63488 00:11:15.503 }, 00:11:15.503 { 00:11:15.503 "name": "pt2", 00:11:15.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.503 "is_configured": true, 00:11:15.503 "data_offset": 2048, 00:11:15.503 "data_size": 63488 00:11:15.503 }, 00:11:15.503 { 00:11:15.503 "name": "pt3", 00:11:15.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.503 "is_configured": true, 00:11:15.503 "data_offset": 2048, 00:11:15.503 "data_size": 63488 
00:11:15.503 }, 00:11:15.503 { 00:11:15.503 "name": "pt4", 00:11:15.503 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.503 "is_configured": true, 00:11:15.503 "data_offset": 2048, 00:11:15.503 "data_size": 63488 00:11:15.503 } 00:11:15.503 ] 00:11:15.503 }' 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.503 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.762 [2024-12-07 02:44:26.742423] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.762 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.762 "name": "raid_bdev1", 00:11:15.762 "aliases": [ 00:11:15.762 "ae8c8dfb-4246-44c7-8919-aad42af07e2a" 00:11:15.762 ], 
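The `verify_raid_bdev_state` helper in the transcript above selects the `raid_bdev1` entry out of `bdev_raid_get_bdevs all` with jq and compares its fields against the expected values. A minimal sketch of that extraction, run against a trimmed copy of the JSON captured in the log (this assumes `jq` is installed, as it is in the test environment; the field values are copied from the dump above):

```shell
# Trimmed copy of the raid_bdev_info JSON captured in the log above.
info='{"name":"raid_bdev1","state":"online","raid_level":"raid1","num_base_bdevs":4,"num_base_bdevs_discovered":4}'

# The same style of jq filter bdev_bdev_raid.sh uses to read individual fields
# out of the rpc_cmd output before comparing them to the expected values.
state=$(echo "$info" | jq -r '.state')
level=$(echo "$info" | jq -r '.raid_level')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')

echo "$state $level $discovered"
```

With the values above this prints `online raid1 4`, matching the `verify_raid_bdev_state raid_bdev1 online raid1 0 4` call in the log.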
00:11:15.762 "product_name": "Raid Volume", 00:11:15.762 "block_size": 512, 00:11:15.762 "num_blocks": 63488, 00:11:15.762 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:15.762 "assigned_rate_limits": { 00:11:15.762 "rw_ios_per_sec": 0, 00:11:15.762 "rw_mbytes_per_sec": 0, 00:11:15.762 "r_mbytes_per_sec": 0, 00:11:15.762 "w_mbytes_per_sec": 0 00:11:15.762 }, 00:11:15.762 "claimed": false, 00:11:15.762 "zoned": false, 00:11:15.762 "supported_io_types": { 00:11:15.762 "read": true, 00:11:15.762 "write": true, 00:11:15.762 "unmap": false, 00:11:15.762 "flush": false, 00:11:15.762 "reset": true, 00:11:15.762 "nvme_admin": false, 00:11:15.762 "nvme_io": false, 00:11:15.762 "nvme_io_md": false, 00:11:15.762 "write_zeroes": true, 00:11:15.762 "zcopy": false, 00:11:15.762 "get_zone_info": false, 00:11:15.762 "zone_management": false, 00:11:15.762 "zone_append": false, 00:11:15.762 "compare": false, 00:11:15.762 "compare_and_write": false, 00:11:15.762 "abort": false, 00:11:15.762 "seek_hole": false, 00:11:15.762 "seek_data": false, 00:11:15.762 "copy": false, 00:11:15.762 "nvme_iov_md": false 00:11:15.762 }, 00:11:15.762 "memory_domains": [ 00:11:15.762 { 00:11:15.762 "dma_device_id": "system", 00:11:15.762 "dma_device_type": 1 00:11:15.762 }, 00:11:15.762 { 00:11:15.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.762 "dma_device_type": 2 00:11:15.762 }, 00:11:15.762 { 00:11:15.762 "dma_device_id": "system", 00:11:15.762 "dma_device_type": 1 00:11:15.762 }, 00:11:15.762 { 00:11:15.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.763 "dma_device_type": 2 00:11:15.763 }, 00:11:15.763 { 00:11:15.763 "dma_device_id": "system", 00:11:15.763 "dma_device_type": 1 00:11:15.763 }, 00:11:15.763 { 00:11:15.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.763 "dma_device_type": 2 00:11:15.763 }, 00:11:15.763 { 00:11:15.763 "dma_device_id": "system", 00:11:15.763 "dma_device_type": 1 00:11:15.763 }, 00:11:15.763 { 00:11:15.763 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:15.763 "dma_device_type": 2 00:11:15.763 } 00:11:15.763 ], 00:11:15.763 "driver_specific": { 00:11:15.763 "raid": { 00:11:15.763 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:15.763 "strip_size_kb": 0, 00:11:15.763 "state": "online", 00:11:15.763 "raid_level": "raid1", 00:11:15.763 "superblock": true, 00:11:15.763 "num_base_bdevs": 4, 00:11:15.763 "num_base_bdevs_discovered": 4, 00:11:15.763 "num_base_bdevs_operational": 4, 00:11:15.763 "base_bdevs_list": [ 00:11:15.763 { 00:11:15.763 "name": "pt1", 00:11:15.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.763 "is_configured": true, 00:11:15.763 "data_offset": 2048, 00:11:15.763 "data_size": 63488 00:11:15.763 }, 00:11:15.763 { 00:11:15.763 "name": "pt2", 00:11:15.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.763 "is_configured": true, 00:11:15.763 "data_offset": 2048, 00:11:15.763 "data_size": 63488 00:11:15.763 }, 00:11:15.763 { 00:11:15.763 "name": "pt3", 00:11:15.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.763 "is_configured": true, 00:11:15.763 "data_offset": 2048, 00:11:15.763 "data_size": 63488 00:11:15.763 }, 00:11:15.763 { 00:11:15.763 "name": "pt4", 00:11:15.763 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:15.763 "is_configured": true, 00:11:15.763 "data_offset": 2048, 00:11:15.763 "data_size": 63488 00:11:15.763 } 00:11:15.763 ] 00:11:15.763 } 00:11:15.763 } 00:11:15.763 }' 00:11:15.763 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.763 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:15.763 pt2 00:11:15.763 pt3 00:11:15.763 pt4' 00:11:15.763 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.023 02:44:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.023 02:44:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:16.023 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:16.023 [2024-12-07 02:44:27.085705] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ae8c8dfb-4246-44c7-8919-aad42af07e2a 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ae8c8dfb-4246-44c7-8919-aad42af07e2a ']' 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 [2024-12-07 02:44:27.133315] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.283 [2024-12-07 02:44:27.133388] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.283 [2024-12-07 02:44:27.133479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.283 [2024-12-07 02:44:27.133578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.283 [2024-12-07 02:44:27.133604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.283 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.283 [2024-12-07 02:44:27.297089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:16.283 [2024-12-07 02:44:27.299276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:16.283 [2024-12-07 02:44:27.299329] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:16.283 [2024-12-07 02:44:27.299357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:16.283 [2024-12-07 02:44:27.299407] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:16.283 [2024-12-07 02:44:27.299468] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:16.283 [2024-12-07 02:44:27.299487] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:16.283 [2024-12-07 02:44:27.299503] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:16.283 [2024-12-07 02:44:27.299517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.283 [2024-12-07 02:44:27.299528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 
00:11:16.283 request: 00:11:16.283 { 00:11:16.283 "name": "raid_bdev1", 00:11:16.283 "raid_level": "raid1", 00:11:16.283 "base_bdevs": [ 00:11:16.283 "malloc1", 00:11:16.283 "malloc2", 00:11:16.283 "malloc3", 00:11:16.283 "malloc4" 00:11:16.283 ], 00:11:16.283 "superblock": false, 00:11:16.283 "method": "bdev_raid_create", 00:11:16.283 "req_id": 1 00:11:16.284 } 00:11:16.284 Got JSON-RPC error response 00:11:16.284 response: 00:11:16.284 { 00:11:16.284 "code": -17, 00:11:16.284 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:16.284 } 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:16.284 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:16.543 02:44:27 
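The `NOT rpc_cmd bdev_raid_create ...` invocation above expects the duplicate create to fail with `-17` ("File exists"), since the malloc bdevs already carry a superblock from a different raid bdev. A simplified sketch of the expected-failure wrapper (an assumption about its shape, not the actual `common/autotest_common.sh` implementation, which also tracks the exit status in `es`):

```shell
# Invert a command's exit status: succeed only when the wrapped command fails,
# as the duplicate bdev_raid_create is expected to in the log above.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # wrapped command failed, which is what we wanted
}

NOT false && echo "expected failure observed"
```

This is why the `[[ 1 == 0 ]]` comparison right after the error response does not abort the run: the nonzero status is the outcome the test is asserting.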
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.543 [2024-12-07 02:44:27.364934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:16.543 [2024-12-07 02:44:27.365054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.543 [2024-12-07 02:44:27.365096] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:16.543 [2024-12-07 02:44:27.365127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.543 [2024-12-07 02:44:27.367658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.543 [2024-12-07 02:44:27.367728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:16.543 [2024-12-07 02:44:27.367844] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:16.543 [2024-12-07 02:44:27.367929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:16.543 pt1 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.543 02:44:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.543 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.544 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.544 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.544 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.544 "name": "raid_bdev1", 00:11:16.544 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:16.544 "strip_size_kb": 0, 00:11:16.544 "state": "configuring", 00:11:16.544 "raid_level": "raid1", 00:11:16.544 "superblock": true, 00:11:16.544 "num_base_bdevs": 4, 00:11:16.544 "num_base_bdevs_discovered": 1, 00:11:16.544 "num_base_bdevs_operational": 4, 00:11:16.544 "base_bdevs_list": [ 00:11:16.544 { 00:11:16.544 "name": "pt1", 00:11:16.544 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.544 "is_configured": true, 00:11:16.544 "data_offset": 2048, 00:11:16.544 "data_size": 63488 00:11:16.544 }, 00:11:16.544 { 00:11:16.544 "name": null, 00:11:16.544 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.544 "is_configured": false, 00:11:16.544 "data_offset": 2048, 00:11:16.544 "data_size": 63488 00:11:16.544 }, 00:11:16.544 { 00:11:16.544 "name": null, 00:11:16.544 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.544 
"is_configured": false, 00:11:16.544 "data_offset": 2048, 00:11:16.544 "data_size": 63488 00:11:16.544 }, 00:11:16.544 { 00:11:16.544 "name": null, 00:11:16.544 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.544 "is_configured": false, 00:11:16.544 "data_offset": 2048, 00:11:16.544 "data_size": 63488 00:11:16.544 } 00:11:16.544 ] 00:11:16.544 }' 00:11:16.544 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.544 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 [2024-12-07 02:44:27.796181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.803 [2024-12-07 02:44:27.796249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.803 [2024-12-07 02:44:27.796272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:16.803 [2024-12-07 02:44:27.796283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.803 [2024-12-07 02:44:27.796761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.803 [2024-12-07 02:44:27.796779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.803 [2024-12-07 02:44:27.796871] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:16.803 [2024-12-07 02:44:27.796909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:16.803 pt2 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 [2024-12-07 02:44:27.808180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.803 02:44:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.803 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.803 "name": "raid_bdev1", 00:11:16.803 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:16.803 "strip_size_kb": 0, 00:11:16.803 "state": "configuring", 00:11:16.803 "raid_level": "raid1", 00:11:16.803 "superblock": true, 00:11:16.803 "num_base_bdevs": 4, 00:11:16.803 "num_base_bdevs_discovered": 1, 00:11:16.803 "num_base_bdevs_operational": 4, 00:11:16.803 "base_bdevs_list": [ 00:11:16.803 { 00:11:16.803 "name": "pt1", 00:11:16.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.803 "is_configured": true, 00:11:16.803 "data_offset": 2048, 00:11:16.803 "data_size": 63488 00:11:16.803 }, 00:11:16.803 { 00:11:16.804 "name": null, 00:11:16.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.804 "is_configured": false, 00:11:16.804 "data_offset": 0, 00:11:16.804 "data_size": 63488 00:11:16.804 }, 00:11:16.804 { 00:11:16.804 "name": null, 00:11:16.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.804 "is_configured": false, 00:11:16.804 "data_offset": 2048, 00:11:16.804 "data_size": 63488 00:11:16.804 }, 00:11:16.804 { 00:11:16.804 "name": null, 00:11:16.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:16.804 "is_configured": false, 00:11:16.804 "data_offset": 2048, 00:11:16.804 "data_size": 63488 00:11:16.804 } 00:11:16.804 ] 00:11:16.804 }' 00:11:16.804 02:44:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.804 02:44:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.373 [2024-12-07 02:44:28.223543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.373 [2024-12-07 02:44:28.223747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.373 [2024-12-07 02:44:28.223787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:17.373 [2024-12-07 02:44:28.223818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.373 [2024-12-07 02:44:28.224316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.373 [2024-12-07 02:44:28.224383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.373 [2024-12-07 02:44:28.224503] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:17.373 [2024-12-07 02:44:28.224559] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.373 pt2 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.373 02:44:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.373 [2024-12-07 02:44:28.235397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.373 [2024-12-07 02:44:28.235461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.373 [2024-12-07 02:44:28.235494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:17.373 [2024-12-07 02:44:28.235505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.373 [2024-12-07 02:44:28.235875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.373 [2024-12-07 02:44:28.235895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.373 [2024-12-07 02:44:28.235952] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:17.373 [2024-12-07 02:44:28.235979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.373 pt3 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:17.373 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.374 [2024-12-07 02:44:28.247377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:17.374 [2024-12-07 
02:44:28.247426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.374 [2024-12-07 02:44:28.247445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:17.374 [2024-12-07 02:44:28.247455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.374 [2024-12-07 02:44:28.247811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.374 [2024-12-07 02:44:28.247830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:17.374 [2024-12-07 02:44:28.247883] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:17.374 [2024-12-07 02:44:28.247903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:17.374 [2024-12-07 02:44:28.248005] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:11:17.374 [2024-12-07 02:44:28.248023] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:17.374 [2024-12-07 02:44:28.248275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:17.374 [2024-12-07 02:44:28.248400] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:11:17.374 [2024-12-07 02:44:28.248409] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:11:17.374 [2024-12-07 02:44:28.248514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.374 pt4 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.374 "name": "raid_bdev1", 00:11:17.374 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:17.374 "strip_size_kb": 0, 00:11:17.374 "state": "online", 00:11:17.374 "raid_level": "raid1", 00:11:17.374 "superblock": true, 00:11:17.374 "num_base_bdevs": 4, 00:11:17.374 
"num_base_bdevs_discovered": 4, 00:11:17.374 "num_base_bdevs_operational": 4, 00:11:17.374 "base_bdevs_list": [ 00:11:17.374 { 00:11:17.374 "name": "pt1", 00:11:17.374 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.374 "is_configured": true, 00:11:17.374 "data_offset": 2048, 00:11:17.374 "data_size": 63488 00:11:17.374 }, 00:11:17.374 { 00:11:17.374 "name": "pt2", 00:11:17.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.374 "is_configured": true, 00:11:17.374 "data_offset": 2048, 00:11:17.374 "data_size": 63488 00:11:17.374 }, 00:11:17.374 { 00:11:17.374 "name": "pt3", 00:11:17.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.374 "is_configured": true, 00:11:17.374 "data_offset": 2048, 00:11:17.374 "data_size": 63488 00:11:17.374 }, 00:11:17.374 { 00:11:17.374 "name": "pt4", 00:11:17.374 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.374 "is_configured": true, 00:11:17.374 "data_offset": 2048, 00:11:17.374 "data_size": 63488 00:11:17.374 } 00:11:17.374 ] 00:11:17.374 }' 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.374 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.944 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:17.944 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:17.944 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.944 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.944 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.944 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.945 02:44:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.945 [2024-12-07 02:44:28.742844] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.945 "name": "raid_bdev1", 00:11:17.945 "aliases": [ 00:11:17.945 "ae8c8dfb-4246-44c7-8919-aad42af07e2a" 00:11:17.945 ], 00:11:17.945 "product_name": "Raid Volume", 00:11:17.945 "block_size": 512, 00:11:17.945 "num_blocks": 63488, 00:11:17.945 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:17.945 "assigned_rate_limits": { 00:11:17.945 "rw_ios_per_sec": 0, 00:11:17.945 "rw_mbytes_per_sec": 0, 00:11:17.945 "r_mbytes_per_sec": 0, 00:11:17.945 "w_mbytes_per_sec": 0 00:11:17.945 }, 00:11:17.945 "claimed": false, 00:11:17.945 "zoned": false, 00:11:17.945 "supported_io_types": { 00:11:17.945 "read": true, 00:11:17.945 "write": true, 00:11:17.945 "unmap": false, 00:11:17.945 "flush": false, 00:11:17.945 "reset": true, 00:11:17.945 "nvme_admin": false, 00:11:17.945 "nvme_io": false, 00:11:17.945 "nvme_io_md": false, 00:11:17.945 "write_zeroes": true, 00:11:17.945 "zcopy": false, 00:11:17.945 "get_zone_info": false, 00:11:17.945 "zone_management": false, 00:11:17.945 "zone_append": false, 00:11:17.945 "compare": false, 00:11:17.945 "compare_and_write": false, 00:11:17.945 "abort": false, 00:11:17.945 "seek_hole": false, 00:11:17.945 "seek_data": false, 00:11:17.945 "copy": false, 00:11:17.945 "nvme_iov_md": false 00:11:17.945 }, 00:11:17.945 "memory_domains": [ 00:11:17.945 { 00:11:17.945 "dma_device_id": "system", 00:11:17.945 
"dma_device_type": 1 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.945 "dma_device_type": 2 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "dma_device_id": "system", 00:11:17.945 "dma_device_type": 1 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.945 "dma_device_type": 2 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "dma_device_id": "system", 00:11:17.945 "dma_device_type": 1 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.945 "dma_device_type": 2 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "dma_device_id": "system", 00:11:17.945 "dma_device_type": 1 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.945 "dma_device_type": 2 00:11:17.945 } 00:11:17.945 ], 00:11:17.945 "driver_specific": { 00:11:17.945 "raid": { 00:11:17.945 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:17.945 "strip_size_kb": 0, 00:11:17.945 "state": "online", 00:11:17.945 "raid_level": "raid1", 00:11:17.945 "superblock": true, 00:11:17.945 "num_base_bdevs": 4, 00:11:17.945 "num_base_bdevs_discovered": 4, 00:11:17.945 "num_base_bdevs_operational": 4, 00:11:17.945 "base_bdevs_list": [ 00:11:17.945 { 00:11:17.945 "name": "pt1", 00:11:17.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.945 "is_configured": true, 00:11:17.945 "data_offset": 2048, 00:11:17.945 "data_size": 63488 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "name": "pt2", 00:11:17.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.945 "is_configured": true, 00:11:17.945 "data_offset": 2048, 00:11:17.945 "data_size": 63488 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "name": "pt3", 00:11:17.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.945 "is_configured": true, 00:11:17.945 "data_offset": 2048, 00:11:17.945 "data_size": 63488 00:11:17.945 }, 00:11:17.945 { 00:11:17.945 "name": "pt4", 00:11:17.945 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:17.945 "is_configured": true, 00:11:17.945 "data_offset": 2048, 00:11:17.945 "data_size": 63488 00:11:17.945 } 00:11:17.945 ] 00:11:17.945 } 00:11:17.945 } 00:11:17.945 }' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:17.945 pt2 00:11:17.945 pt3 00:11:17.945 pt4' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.945 02:44:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.945 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.946 02:44:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.946 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.946 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.946 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.946 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.946 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.946 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:17.946 [2024-12-07 02:44:29.014390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ae8c8dfb-4246-44c7-8919-aad42af07e2a '!=' ae8c8dfb-4246-44c7-8919-aad42af07e2a ']' 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.206 [2024-12-07 02:44:29.046067] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:18.206 02:44:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.206 "name": "raid_bdev1", 00:11:18.206 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:18.206 "strip_size_kb": 0, 00:11:18.206 "state": "online", 
00:11:18.206 "raid_level": "raid1", 00:11:18.206 "superblock": true, 00:11:18.206 "num_base_bdevs": 4, 00:11:18.206 "num_base_bdevs_discovered": 3, 00:11:18.206 "num_base_bdevs_operational": 3, 00:11:18.206 "base_bdevs_list": [ 00:11:18.206 { 00:11:18.206 "name": null, 00:11:18.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.206 "is_configured": false, 00:11:18.206 "data_offset": 0, 00:11:18.206 "data_size": 63488 00:11:18.206 }, 00:11:18.206 { 00:11:18.206 "name": "pt2", 00:11:18.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.206 "is_configured": true, 00:11:18.206 "data_offset": 2048, 00:11:18.206 "data_size": 63488 00:11:18.206 }, 00:11:18.206 { 00:11:18.206 "name": "pt3", 00:11:18.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.206 "is_configured": true, 00:11:18.206 "data_offset": 2048, 00:11:18.206 "data_size": 63488 00:11:18.206 }, 00:11:18.206 { 00:11:18.206 "name": "pt4", 00:11:18.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.206 "is_configured": true, 00:11:18.206 "data_offset": 2048, 00:11:18.206 "data_size": 63488 00:11:18.206 } 00:11:18.206 ] 00:11:18.206 }' 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.206 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.467 [2024-12-07 02:44:29.449291] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.467 [2024-12-07 02:44:29.449361] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.467 [2024-12-07 02:44:29.449441] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:18.467 [2024-12-07 02:44:29.449514] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.467 [2024-12-07 02:44:29.449527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:18.467 
02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.467 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.468 [2024-12-07 02:44:29.533154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.468 [2024-12-07 02:44:29.533258] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.468 [2024-12-07 02:44:29.533280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:18.468 [2024-12-07 02:44:29.533292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.468 [2024-12-07 02:44:29.535752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.468 [2024-12-07 02:44:29.535791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.468 [2024-12-07 02:44:29.535861] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.468 [2024-12-07 02:44:29.535896] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.468 pt2 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.468 02:44:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.728 "name": "raid_bdev1", 00:11:18.728 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:18.728 "strip_size_kb": 0, 00:11:18.728 "state": "configuring", 00:11:18.728 "raid_level": "raid1", 00:11:18.728 "superblock": true, 00:11:18.728 "num_base_bdevs": 4, 00:11:18.728 "num_base_bdevs_discovered": 1, 00:11:18.728 "num_base_bdevs_operational": 3, 00:11:18.728 "base_bdevs_list": [ 00:11:18.728 { 00:11:18.728 "name": null, 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.728 "is_configured": false, 00:11:18.728 "data_offset": 2048, 00:11:18.728 "data_size": 63488 00:11:18.728 }, 00:11:18.728 { 00:11:18.728 "name": "pt2", 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.728 "is_configured": true, 00:11:18.728 "data_offset": 2048, 00:11:18.728 "data_size": 63488 00:11:18.728 }, 00:11:18.728 { 00:11:18.728 "name": null, 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.728 "is_configured": false, 00:11:18.728 "data_offset": 2048, 00:11:18.728 "data_size": 63488 00:11:18.728 }, 00:11:18.728 { 00:11:18.728 "name": null, 00:11:18.728 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.728 "is_configured": false, 00:11:18.728 "data_offset": 2048, 00:11:18.728 "data_size": 63488 00:11:18.728 } 00:11:18.728 ] 00:11:18.728 }' 
00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.728 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 [2024-12-07 02:44:29.948475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:18.989 [2024-12-07 02:44:29.948600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.989 [2024-12-07 02:44:29.948644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:18.989 [2024-12-07 02:44:29.948683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.989 [2024-12-07 02:44:29.949125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.989 [2024-12-07 02:44:29.949183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:18.989 [2024-12-07 02:44:29.949276] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:18.989 [2024-12-07 02:44:29.949328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:18.989 pt3 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 02:44:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.989 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.989 "name": "raid_bdev1", 00:11:18.989 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:18.989 "strip_size_kb": 0, 00:11:18.989 "state": "configuring", 00:11:18.989 "raid_level": "raid1", 00:11:18.989 "superblock": true, 00:11:18.989 "num_base_bdevs": 4, 00:11:18.989 "num_base_bdevs_discovered": 2, 00:11:18.989 "num_base_bdevs_operational": 3, 00:11:18.989 
"base_bdevs_list": [ 00:11:18.989 { 00:11:18.989 "name": null, 00:11:18.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.989 "is_configured": false, 00:11:18.989 "data_offset": 2048, 00:11:18.989 "data_size": 63488 00:11:18.989 }, 00:11:18.989 { 00:11:18.989 "name": "pt2", 00:11:18.989 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.989 "is_configured": true, 00:11:18.989 "data_offset": 2048, 00:11:18.989 "data_size": 63488 00:11:18.989 }, 00:11:18.989 { 00:11:18.989 "name": "pt3", 00:11:18.989 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.989 "is_configured": true, 00:11:18.989 "data_offset": 2048, 00:11:18.989 "data_size": 63488 00:11:18.989 }, 00:11:18.989 { 00:11:18.989 "name": null, 00:11:18.989 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.989 "is_configured": false, 00:11:18.989 "data_offset": 2048, 00:11:18.989 "data_size": 63488 00:11:18.989 } 00:11:18.989 ] 00:11:18.989 }' 00:11:18.989 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.989 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.560 [2024-12-07 02:44:30.435670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:19.560 [2024-12-07 02:44:30.435743] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.560 [2024-12-07 02:44:30.435785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:19.560 [2024-12-07 02:44:30.435799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.560 [2024-12-07 02:44:30.436228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.560 [2024-12-07 02:44:30.436255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:19.560 [2024-12-07 02:44:30.436336] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:19.560 [2024-12-07 02:44:30.436368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:19.560 [2024-12-07 02:44:30.436480] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:11:19.560 [2024-12-07 02:44:30.436492] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:19.560 [2024-12-07 02:44:30.436776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:19.560 [2024-12-07 02:44:30.436923] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:11:19.560 [2024-12-07 02:44:30.436940] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:11:19.560 [2024-12-07 02:44:30.437058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.560 pt4 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.560 "name": "raid_bdev1", 00:11:19.560 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:19.560 "strip_size_kb": 0, 00:11:19.560 "state": "online", 00:11:19.560 "raid_level": "raid1", 00:11:19.560 "superblock": true, 00:11:19.560 "num_base_bdevs": 4, 00:11:19.560 "num_base_bdevs_discovered": 3, 00:11:19.560 "num_base_bdevs_operational": 3, 00:11:19.560 "base_bdevs_list": [ 00:11:19.560 { 00:11:19.560 "name": null, 00:11:19.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.560 "is_configured": false, 00:11:19.560 
"data_offset": 2048, 00:11:19.560 "data_size": 63488 00:11:19.560 }, 00:11:19.560 { 00:11:19.560 "name": "pt2", 00:11:19.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.560 "is_configured": true, 00:11:19.560 "data_offset": 2048, 00:11:19.560 "data_size": 63488 00:11:19.560 }, 00:11:19.560 { 00:11:19.560 "name": "pt3", 00:11:19.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.560 "is_configured": true, 00:11:19.560 "data_offset": 2048, 00:11:19.560 "data_size": 63488 00:11:19.560 }, 00:11:19.560 { 00:11:19.560 "name": "pt4", 00:11:19.560 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.560 "is_configured": true, 00:11:19.560 "data_offset": 2048, 00:11:19.560 "data_size": 63488 00:11:19.560 } 00:11:19.560 ] 00:11:19.560 }' 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.560 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 [2024-12-07 02:44:30.862986] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:19.821 [2024-12-07 02:44:30.863066] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.821 [2024-12-07 02:44:30.863160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.821 [2024-12-07 02:44:30.863251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.821 [2024-12-07 02:44:30.863339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:11:19.821 02:44:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.082 [2024-12-07 02:44:30.938878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.082 [2024-12-07 02:44:30.938971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:20.082 [2024-12-07 02:44:30.939012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:20.082 [2024-12-07 02:44:30.939040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.082 [2024-12-07 02:44:30.941568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.082 [2024-12-07 02:44:30.941650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.082 [2024-12-07 02:44:30.941763] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:20.082 [2024-12-07 02:44:30.941825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.082 [2024-12-07 02:44:30.941964] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:20.082 [2024-12-07 02:44:30.942027] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.082 [2024-12-07 02:44:30.942068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:11:20.082 [2024-12-07 02:44:30.942154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.082 [2024-12-07 02:44:30.942293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:20.082 pt1 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.082 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.083 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.083 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.083 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.083 "name": "raid_bdev1", 00:11:20.083 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:20.083 "strip_size_kb": 0, 00:11:20.083 "state": "configuring", 00:11:20.083 "raid_level": "raid1", 00:11:20.083 "superblock": true, 00:11:20.083 "num_base_bdevs": 4, 00:11:20.083 "num_base_bdevs_discovered": 2, 00:11:20.083 "num_base_bdevs_operational": 3, 00:11:20.083 "base_bdevs_list": [ 00:11:20.083 { 00:11:20.083 "name": null, 00:11:20.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.083 "is_configured": false, 00:11:20.083 "data_offset": 2048, 00:11:20.083 
"data_size": 63488 00:11:20.083 }, 00:11:20.083 { 00:11:20.083 "name": "pt2", 00:11:20.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.083 "is_configured": true, 00:11:20.083 "data_offset": 2048, 00:11:20.083 "data_size": 63488 00:11:20.083 }, 00:11:20.083 { 00:11:20.083 "name": "pt3", 00:11:20.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.083 "is_configured": true, 00:11:20.083 "data_offset": 2048, 00:11:20.083 "data_size": 63488 00:11:20.083 }, 00:11:20.083 { 00:11:20.083 "name": null, 00:11:20.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.083 "is_configured": false, 00:11:20.083 "data_offset": 2048, 00:11:20.083 "data_size": 63488 00:11:20.083 } 00:11:20.083 ] 00:11:20.083 }' 00:11:20.083 02:44:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.083 02:44:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.343 [2024-12-07 
02:44:31.406033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:20.343 [2024-12-07 02:44:31.406145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.343 [2024-12-07 02:44:31.406183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:20.343 [2024-12-07 02:44:31.406214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.343 [2024-12-07 02:44:31.406716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.343 [2024-12-07 02:44:31.406781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:20.343 [2024-12-07 02:44:31.406880] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:20.343 [2024-12-07 02:44:31.406933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:20.343 [2024-12-07 02:44:31.407063] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:20.343 [2024-12-07 02:44:31.407106] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.343 [2024-12-07 02:44:31.407391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:20.343 [2024-12-07 02:44:31.407560] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:20.343 [2024-12-07 02:44:31.407615] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:20.343 [2024-12-07 02:44:31.407775] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.343 pt4 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:20.343 02:44:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.343 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.344 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.344 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.344 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.603 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.603 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.603 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.604 "name": "raid_bdev1", 00:11:20.604 "uuid": "ae8c8dfb-4246-44c7-8919-aad42af07e2a", 00:11:20.604 "strip_size_kb": 0, 00:11:20.604 "state": "online", 00:11:20.604 "raid_level": "raid1", 00:11:20.604 "superblock": true, 00:11:20.604 "num_base_bdevs": 4, 00:11:20.604 "num_base_bdevs_discovered": 3, 00:11:20.604 "num_base_bdevs_operational": 3, 00:11:20.604 "base_bdevs_list": [ 00:11:20.604 { 
00:11:20.604 "name": null, 00:11:20.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.604 "is_configured": false, 00:11:20.604 "data_offset": 2048, 00:11:20.604 "data_size": 63488 00:11:20.604 }, 00:11:20.604 { 00:11:20.604 "name": "pt2", 00:11:20.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.604 "is_configured": true, 00:11:20.604 "data_offset": 2048, 00:11:20.604 "data_size": 63488 00:11:20.604 }, 00:11:20.604 { 00:11:20.604 "name": "pt3", 00:11:20.604 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.604 "is_configured": true, 00:11:20.604 "data_offset": 2048, 00:11:20.604 "data_size": 63488 00:11:20.604 }, 00:11:20.604 { 00:11:20.604 "name": "pt4", 00:11:20.604 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.604 "is_configured": true, 00:11:20.604 "data_offset": 2048, 00:11:20.604 "data_size": 63488 00:11:20.604 } 00:11:20.604 ] 00:11:20.604 }' 00:11:20.604 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.604 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:20.864 
02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.864 [2024-12-07 02:44:31.821620] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ae8c8dfb-4246-44c7-8919-aad42af07e2a '!=' ae8c8dfb-4246-44c7-8919-aad42af07e2a ']' 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85528 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 85528 ']' 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 85528 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85528 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.864 killing process with pid 85528 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85528' 00:11:20.864 02:44:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 85528 00:11:20.864 [2024-12-07 02:44:31.890738] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:20.865 [2024-12-07 02:44:31.890823] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.865 02:44:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 85528 00:11:20.865 [2024-12-07 02:44:31.890903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.865 [2024-12-07 02:44:31.890913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:21.125 [2024-12-07 02:44:31.973068] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.386 02:44:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:21.386 00:11:21.386 real 0m7.056s 00:11:21.386 user 0m11.588s 00:11:21.386 sys 0m1.588s 00:11:21.386 02:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.386 02:44:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.386 ************************************ 00:11:21.386 END TEST raid_superblock_test 00:11:21.386 ************************************ 00:11:21.386 02:44:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:21.386 02:44:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:21.386 02:44:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.386 02:44:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.386 ************************************ 00:11:21.386 START TEST raid_read_error_test 00:11:21.386 ************************************ 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:21.386 
02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:21.386 02:44:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1mfHlzTcKx 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85999 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85999 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85999 ']' 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.386 02:44:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.647 [2024-12-07 02:44:32.529301] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:21.647 [2024-12-07 02:44:32.529502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85999 ] 00:11:21.647 [2024-12-07 02:44:32.695778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.908 [2024-12-07 02:44:32.767387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.908 [2024-12-07 02:44:32.844663] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.908 [2024-12-07 02:44:32.844765] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 BaseBdev1_malloc 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 true 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 [2024-12-07 02:44:33.379018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:22.479 [2024-12-07 02:44:33.379085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.479 [2024-12-07 02:44:33.379107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:22.479 [2024-12-07 02:44:33.379116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.479 [2024-12-07 02:44:33.381528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.479 [2024-12-07 02:44:33.381629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:22.479 BaseBdev1 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 BaseBdev2_malloc 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.479 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 true 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 [2024-12-07 02:44:33.434775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:22.480 [2024-12-07 02:44:33.434828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.480 [2024-12-07 02:44:33.434846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:22.480 [2024-12-07 02:44:33.434854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.480 [2024-12-07 02:44:33.437173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.480 [2024-12-07 02:44:33.437256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:22.480 BaseBdev2 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 BaseBdev3_malloc 00:11:22.480 02:44:33 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 true 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 [2024-12-07 02:44:33.481537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:22.480 [2024-12-07 02:44:33.481603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.480 [2024-12-07 02:44:33.481624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:22.480 [2024-12-07 02:44:33.481633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.480 [2024-12-07 02:44:33.483924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.480 [2024-12-07 02:44:33.483960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:22.480 BaseBdev3 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 BaseBdev4_malloc 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 true 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 [2024-12-07 02:44:33.528125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:22.480 [2024-12-07 02:44:33.528173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.480 [2024-12-07 02:44:33.528212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:22.480 [2024-12-07 02:44:33.528221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.480 [2024-12-07 02:44:33.530536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.480 [2024-12-07 02:44:33.530573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:22.480 BaseBdev4 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.480 [2024-12-07 02:44:33.540165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.480 [2024-12-07 02:44:33.542253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.480 [2024-12-07 02:44:33.542338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.480 [2024-12-07 02:44:33.542392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.480 [2024-12-07 02:44:33.542606] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:22.480 [2024-12-07 02:44:33.542619] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:22.480 [2024-12-07 02:44:33.542928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:22.480 [2024-12-07 02:44:33.543068] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:22.480 [2024-12-07 02:44:33.543087] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:22.480 [2024-12-07 02:44:33.543226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:22.480 02:44:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.480 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.741 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.741 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.741 "name": "raid_bdev1", 00:11:22.741 "uuid": "b9ab9cb6-58c5-485e-a2d4-56f699e25711", 00:11:22.741 "strip_size_kb": 0, 00:11:22.741 "state": "online", 00:11:22.741 "raid_level": "raid1", 00:11:22.741 "superblock": true, 00:11:22.741 "num_base_bdevs": 4, 00:11:22.741 "num_base_bdevs_discovered": 4, 00:11:22.741 "num_base_bdevs_operational": 4, 00:11:22.741 "base_bdevs_list": [ 00:11:22.741 { 
00:11:22.741 "name": "BaseBdev1", 00:11:22.741 "uuid": "0e56777c-e9ce-5f6f-ab85-95046055520a", 00:11:22.741 "is_configured": true, 00:11:22.741 "data_offset": 2048, 00:11:22.741 "data_size": 63488 00:11:22.741 }, 00:11:22.741 { 00:11:22.741 "name": "BaseBdev2", 00:11:22.741 "uuid": "95ade646-0f0d-5d5b-80f9-74535aabb7fb", 00:11:22.741 "is_configured": true, 00:11:22.741 "data_offset": 2048, 00:11:22.741 "data_size": 63488 00:11:22.741 }, 00:11:22.741 { 00:11:22.741 "name": "BaseBdev3", 00:11:22.741 "uuid": "bcb24a6f-fba7-558c-b42c-f0de1f82fabc", 00:11:22.741 "is_configured": true, 00:11:22.741 "data_offset": 2048, 00:11:22.741 "data_size": 63488 00:11:22.741 }, 00:11:22.741 { 00:11:22.741 "name": "BaseBdev4", 00:11:22.741 "uuid": "9688afbe-cb96-578f-a18b-6f1f66b9d8ff", 00:11:22.741 "is_configured": true, 00:11:22.741 "data_offset": 2048, 00:11:22.741 "data_size": 63488 00:11:22.741 } 00:11:22.741 ] 00:11:22.741 }' 00:11:22.741 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.741 02:44:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.002 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:23.002 02:44:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:23.262 [2024-12-07 02:44:34.087717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.207 02:44:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:24.207 02:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.207 02:44:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.207 02:44:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.207 02:44:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.207 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.207 "name": "raid_bdev1", 00:11:24.207 "uuid": "b9ab9cb6-58c5-485e-a2d4-56f699e25711", 00:11:24.207 "strip_size_kb": 0, 00:11:24.207 "state": "online", 00:11:24.207 "raid_level": "raid1", 00:11:24.207 "superblock": true, 00:11:24.207 "num_base_bdevs": 4, 00:11:24.207 "num_base_bdevs_discovered": 4, 00:11:24.207 "num_base_bdevs_operational": 4, 00:11:24.207 "base_bdevs_list": [ 00:11:24.207 { 00:11:24.207 "name": "BaseBdev1", 00:11:24.207 "uuid": "0e56777c-e9ce-5f6f-ab85-95046055520a", 00:11:24.207 "is_configured": true, 00:11:24.207 "data_offset": 2048, 00:11:24.207 "data_size": 63488 00:11:24.208 }, 00:11:24.208 { 00:11:24.208 "name": "BaseBdev2", 00:11:24.208 "uuid": "95ade646-0f0d-5d5b-80f9-74535aabb7fb", 00:11:24.208 "is_configured": true, 00:11:24.208 "data_offset": 2048, 00:11:24.208 "data_size": 63488 00:11:24.208 }, 00:11:24.208 { 00:11:24.208 "name": "BaseBdev3", 00:11:24.208 "uuid": "bcb24a6f-fba7-558c-b42c-f0de1f82fabc", 00:11:24.208 "is_configured": true, 00:11:24.208 "data_offset": 2048, 00:11:24.208 "data_size": 63488 00:11:24.208 }, 00:11:24.208 { 00:11:24.208 "name": "BaseBdev4", 00:11:24.208 "uuid": "9688afbe-cb96-578f-a18b-6f1f66b9d8ff", 00:11:24.208 "is_configured": true, 00:11:24.208 "data_offset": 2048, 00:11:24.208 "data_size": 63488 00:11:24.208 } 00:11:24.208 ] 00:11:24.208 }' 00:11:24.208 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.208 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.467 [2024-12-07 02:44:35.417301] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:24.467 [2024-12-07 02:44:35.417417] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.467 [2024-12-07 02:44:35.420181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.467 [2024-12-07 02:44:35.420291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.467 [2024-12-07 02:44:35.420454] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.467 [2024-12-07 02:44:35.420507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:24.467 { 00:11:24.467 "results": [ 00:11:24.467 { 00:11:24.467 "job": "raid_bdev1", 00:11:24.467 "core_mask": "0x1", 00:11:24.467 "workload": "randrw", 00:11:24.467 "percentage": 50, 00:11:24.467 "status": "finished", 00:11:24.467 "queue_depth": 1, 00:11:24.467 "io_size": 131072, 00:11:24.467 "runtime": 1.330296, 00:11:24.467 "iops": 8579.29363089117, 00:11:24.467 "mibps": 1072.4117038613963, 00:11:24.467 "io_failed": 0, 00:11:24.467 "io_timeout": 0, 00:11:24.467 "avg_latency_us": 114.00791849637488, 00:11:24.467 "min_latency_us": 22.134497816593885, 00:11:24.467 "max_latency_us": 1466.6899563318777 00:11:24.467 } 00:11:24.467 ], 00:11:24.467 "core_count": 1 00:11:24.467 } 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85999 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85999 ']' 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85999 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85999 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85999' 00:11:24.467 killing process with pid 85999 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85999 00:11:24.467 [2024-12-07 02:44:35.463318] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.467 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85999 00:11:24.467 [2024-12-07 02:44:35.532047] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1mfHlzTcKx 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:25.037 00:11:25.037 real 0m3.493s 00:11:25.037 user 0m4.202s 00:11:25.037 sys 0m0.666s 
00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.037 02:44:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.037 ************************************ 00:11:25.037 END TEST raid_read_error_test 00:11:25.037 ************************************ 00:11:25.037 02:44:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:25.037 02:44:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:25.037 02:44:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.037 02:44:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.037 ************************************ 00:11:25.037 START TEST raid_write_error_test 00:11:25.037 ************************************ 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:25.037 02:44:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0SSsqGZ3In 00:11:25.037 02:44:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=86128 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 86128 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 86128 ']' 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.037 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.037 [2024-12-07 02:44:36.104695] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:11:25.037 [2024-12-07 02:44:36.104839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86128 ] 00:11:25.298 [2024-12-07 02:44:36.269219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.298 [2024-12-07 02:44:36.341349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.558 [2024-12-07 02:44:36.419083] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.558 [2024-12-07 02:44:36.419210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.129 BaseBdev1_malloc 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.129 true 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.129 [2024-12-07 02:44:36.973393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:26.129 [2024-12-07 02:44:36.973493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.129 [2024-12-07 02:44:36.973518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:26.129 [2024-12-07 02:44:36.973528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.129 [2024-12-07 02:44:36.975924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.129 [2024-12-07 02:44:36.975965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:26.129 BaseBdev1 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.129 02:44:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.129 BaseBdev2_malloc 00:11:26.129 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.129 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:26.129 02:44:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.129 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.129 true 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 [2024-12-07 02:44:37.029703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:26.130 [2024-12-07 02:44:37.029770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.130 [2024-12-07 02:44:37.029807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:26.130 [2024-12-07 02:44:37.029816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.130 [2024-12-07 02:44:37.032151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.130 [2024-12-07 02:44:37.032253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:26.130 BaseBdev2 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:26.130 BaseBdev3_malloc 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 true 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 [2024-12-07 02:44:37.076408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:26.130 [2024-12-07 02:44:37.076461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.130 [2024-12-07 02:44:37.076480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:26.130 [2024-12-07 02:44:37.076490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.130 [2024-12-07 02:44:37.078824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.130 [2024-12-07 02:44:37.078893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:26.130 BaseBdev3 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 BaseBdev4_malloc 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 true 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 [2024-12-07 02:44:37.122905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:26.130 [2024-12-07 02:44:37.122952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.130 [2024-12-07 02:44:37.122990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.130 [2024-12-07 02:44:37.122999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.130 [2024-12-07 02:44:37.125338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.130 [2024-12-07 02:44:37.125375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:26.130 BaseBdev4 
00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 [2024-12-07 02:44:37.134940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.130 [2024-12-07 02:44:37.137167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.130 [2024-12-07 02:44:37.137257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.130 [2024-12-07 02:44:37.137311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.130 [2024-12-07 02:44:37.137512] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007080 00:11:26.130 [2024-12-07 02:44:37.137523] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.130 [2024-12-07 02:44:37.137804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:26.130 [2024-12-07 02:44:37.137965] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007080 00:11:26.130 [2024-12-07 02:44:37.138019] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007080 00:11:26.130 [2024-12-07 02:44:37.138148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.130 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.130 "name": "raid_bdev1", 00:11:26.130 "uuid": "fa6e8e7f-0f03-4942-81eb-fde8524d0e57", 00:11:26.130 "strip_size_kb": 0, 00:11:26.130 "state": "online", 00:11:26.131 "raid_level": "raid1", 00:11:26.131 "superblock": true, 00:11:26.131 "num_base_bdevs": 4, 00:11:26.131 "num_base_bdevs_discovered": 4, 00:11:26.131 
"num_base_bdevs_operational": 4, 00:11:26.131 "base_bdevs_list": [ 00:11:26.131 { 00:11:26.131 "name": "BaseBdev1", 00:11:26.131 "uuid": "9a4ca310-6b91-57d1-87be-0377f8905489", 00:11:26.131 "is_configured": true, 00:11:26.131 "data_offset": 2048, 00:11:26.131 "data_size": 63488 00:11:26.131 }, 00:11:26.131 { 00:11:26.131 "name": "BaseBdev2", 00:11:26.131 "uuid": "61568846-b0a7-5f20-9a75-b3b3d0906394", 00:11:26.131 "is_configured": true, 00:11:26.131 "data_offset": 2048, 00:11:26.131 "data_size": 63488 00:11:26.131 }, 00:11:26.131 { 00:11:26.131 "name": "BaseBdev3", 00:11:26.131 "uuid": "38921f68-a8e8-5ed9-b232-abe277c648ab", 00:11:26.131 "is_configured": true, 00:11:26.131 "data_offset": 2048, 00:11:26.131 "data_size": 63488 00:11:26.131 }, 00:11:26.131 { 00:11:26.131 "name": "BaseBdev4", 00:11:26.131 "uuid": "258d340f-57d8-529b-9614-36676632fd28", 00:11:26.131 "is_configured": true, 00:11:26.131 "data_offset": 2048, 00:11:26.131 "data_size": 63488 00:11:26.131 } 00:11:26.131 ] 00:11:26.131 }' 00:11:26.131 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.131 02:44:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.702 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:26.702 02:44:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:26.702 [2024-12-07 02:44:37.650526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.642 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:27.642 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.643 [2024-12-07 02:44:38.571300] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:27.643 [2024-12-07 02:44:38.571476] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:27.643 [2024-12-07 02:44:38.571755] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.643 "name": "raid_bdev1", 00:11:27.643 "uuid": "fa6e8e7f-0f03-4942-81eb-fde8524d0e57", 00:11:27.643 "strip_size_kb": 0, 00:11:27.643 "state": "online", 00:11:27.643 "raid_level": "raid1", 00:11:27.643 "superblock": true, 00:11:27.643 "num_base_bdevs": 4, 00:11:27.643 "num_base_bdevs_discovered": 3, 00:11:27.643 "num_base_bdevs_operational": 3, 00:11:27.643 "base_bdevs_list": [ 00:11:27.643 { 00:11:27.643 "name": null, 00:11:27.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.643 "is_configured": false, 00:11:27.643 "data_offset": 0, 00:11:27.643 "data_size": 63488 00:11:27.643 }, 00:11:27.643 { 00:11:27.643 "name": "BaseBdev2", 00:11:27.643 "uuid": "61568846-b0a7-5f20-9a75-b3b3d0906394", 00:11:27.643 "is_configured": true, 00:11:27.643 "data_offset": 2048, 00:11:27.643 "data_size": 63488 00:11:27.643 }, 00:11:27.643 { 00:11:27.643 "name": "BaseBdev3", 00:11:27.643 "uuid": "38921f68-a8e8-5ed9-b232-abe277c648ab", 00:11:27.643 "is_configured": true, 00:11:27.643 "data_offset": 2048, 00:11:27.643 "data_size": 63488 00:11:27.643 }, 00:11:27.643 { 00:11:27.643 "name": "BaseBdev4", 00:11:27.643 "uuid": "258d340f-57d8-529b-9614-36676632fd28", 00:11:27.643 "is_configured": true, 00:11:27.643 "data_offset": 2048, 00:11:27.643 "data_size": 63488 00:11:27.643 } 00:11:27.643 ] 
00:11:27.643 }' 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.643 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.213 02:44:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.213 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.213 02:44:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.213 [2024-12-07 02:44:38.997201] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.213 [2024-12-07 02:44:38.997312] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.213 [2024-12-07 02:44:38.999718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.213 [2024-12-07 02:44:38.999816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.213 [2024-12-07 02:44:38.999956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.213 [2024-12-07 02:44:39.000033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state offline 00:11:28.213 { 00:11:28.213 "results": [ 00:11:28.213 { 00:11:28.213 "job": "raid_bdev1", 00:11:28.213 "core_mask": "0x1", 00:11:28.213 "workload": "randrw", 00:11:28.213 "percentage": 50, 00:11:28.213 "status": "finished", 00:11:28.213 "queue_depth": 1, 00:11:28.213 "io_size": 131072, 00:11:28.213 "runtime": 1.347105, 00:11:28.213 "iops": 9593.164601126118, 00:11:28.213 "mibps": 1199.1455751407648, 00:11:28.213 "io_failed": 0, 00:11:28.213 "io_timeout": 0, 00:11:28.213 "avg_latency_us": 101.80184397541771, 00:11:28.213 "min_latency_us": 22.246288209606988, 00:11:28.213 "max_latency_us": 1402.2986899563318 00:11:28.213 } 00:11:28.213 ], 00:11:28.213 "core_count": 1 
00:11:28.213 } 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 86128 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 86128 ']' 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 86128 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86128 00:11:28.213 killing process with pid 86128 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86128' 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 86128 00:11:28.213 [2024-12-07 02:44:39.046675] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.213 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 86128 00:11:28.213 [2024-12-07 02:44:39.111072] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0SSsqGZ3In 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:28.473 00:11:28.473 real 0m3.498s 00:11:28.473 user 0m4.217s 00:11:28.473 sys 0m0.656s 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:28.473 ************************************ 00:11:28.473 END TEST raid_write_error_test 00:11:28.473 ************************************ 00:11:28.473 02:44:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.473 02:44:39 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:28.473 02:44:39 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:28.473 02:44:39 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:28.733 02:44:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:28.733 02:44:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:28.733 02:44:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:28.733 ************************************ 00:11:28.733 START TEST raid_rebuild_test 00:11:28.733 ************************************ 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:28.733 
02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86260 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86260 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 86260 ']' 00:11:28.733 02:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.734 02:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.734 02:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.734 02:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.734 02:44:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.734 [2024-12-07 02:44:39.661256] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:28.734 [2024-12-07 02:44:39.661455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:28.734 Zero copy mechanism will not be used. 
00:11:28.734 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86260 ] 00:11:28.993 [2024-12-07 02:44:39.820799] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.993 [2024-12-07 02:44:39.892831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.993 [2024-12-07 02:44:39.969270] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.993 [2024-12-07 02:44:39.969414] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 BaseBdev1_malloc 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 [2024-12-07 02:44:40.511575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:29.564 [2024-12-07 02:44:40.511679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.564 [2024-12-07 
02:44:40.511710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:29.564 [2024-12-07 02:44:40.511726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.564 [2024-12-07 02:44:40.514109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.564 [2024-12-07 02:44:40.514147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:29.564 BaseBdev1 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 BaseBdev2_malloc 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 [2024-12-07 02:44:40.561754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:29.564 [2024-12-07 02:44:40.561927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.564 [2024-12-07 02:44:40.561978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:29.564 [2024-12-07 02:44:40.561999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:29.564 [2024-12-07 02:44:40.566485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.564 [2024-12-07 02:44:40.566549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:29.564 BaseBdev2 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 spare_malloc 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 spare_delay 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 [2024-12-07 02:44:40.609740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:29.564 [2024-12-07 02:44:40.609791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.564 [2024-12-07 02:44:40.609813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:29.564 [2024-12-07 02:44:40.609821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.564 [2024-12-07 02:44:40.612103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.564 [2024-12-07 02:44:40.612137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:29.564 spare 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.564 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.564 [2024-12-07 02:44:40.621754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.564 [2024-12-07 02:44:40.623788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.565 [2024-12-07 02:44:40.623872] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:29.565 [2024-12-07 02:44:40.623889] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:29.565 [2024-12-07 02:44:40.624155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:29.565 [2024-12-07 02:44:40.624280] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:29.565 [2024-12-07 02:44:40.624293] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:29.565 [2024-12-07 02:44:40.624418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.565 
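The xtrace above shows the bdev stack the test assembles before the array exists: each malloc bdev is wrapped in a passthru bdev (so the test can later remove and re-add it by name), the spare additionally gets a delay bdev in between, and a two-member RAID1 bdev sits on top. A dry-run sketch of the core RPC sequence (`build_raid1_stack` is a hypothetical helper, not part of the test scripts; drop the `echo` to issue the real RPCs against a running SPDK target):

```shell
# Dry-run of the stack built in the log; `echo` prints each RPC instead
# of issuing it. Commands and arguments mirror the xtrace output above.
build_raid1_stack() {
    local rpc="echo rpc.py"   # replace with the real rpc.py path to execute
    local i
    for i in 1 2; do
        # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru
        $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # RAID1 over both passthru bdevs (this test runs without a superblock)
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
}
build_raid1_stack
```

The passthru layer is what makes the later `bdev_raid_remove_base_bdev` / `bdev_raid_add_base_bdev` cycle possible without destroying the backing malloc device.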
02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:29.565 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:29.825 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:29.825 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:29.825 "name": "raid_bdev1",
00:11:29.825 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000",
00:11:29.825 "strip_size_kb": 0,
00:11:29.825 "state": "online",
00:11:29.825 "raid_level": "raid1",
00:11:29.825 "superblock": false,
00:11:29.825 "num_base_bdevs": 2,
00:11:29.825 "num_base_bdevs_discovered": 2,
00:11:29.825 "num_base_bdevs_operational": 2,
00:11:29.825 "base_bdevs_list": [
00:11:29.825 {
00:11:29.825 "name": "BaseBdev1",
00:11:29.825 "uuid": "285bffd9-d0bc-5631-ab46-5e3b263484fa",
00:11:29.825 "is_configured": true,
00:11:29.825 "data_offset": 0,
00:11:29.825 "data_size": 65536
00:11:29.825 },
00:11:29.825 {
00:11:29.825 "name": "BaseBdev2",
00:11:29.825 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244",
00:11:29.825 "is_configured": true,
00:11:29.825 "data_offset": 0,
00:11:29.825 "data_size": 65536
00:11:29.825 }
00:11:29.825 ]
00:11:29.825 }'
00:11:29.825 02:44:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:29.825 02:44:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.085 [2024-12-07 02:44:41.045287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test --
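The `verify_raid_bdev_state` helper above pulls the array's entry out of the full `bdev_raid_get_bdevs all` dump with a jq `select` filter. The same filter can be exercised offline against a trimmed copy of the response (assumes `jq` is installed; the JSON below is abridged from the log):

```shell
# Select the raid_bdev1 entry from a bdev_raid_get_bdevs-style array,
# as bdev_raid.sh@113 does, then read individual fields back out of it.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<'EOF'
[
  {
    "name": "raid_bdev1",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "superblock": false,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
EOF
)
echo "$raid_bdev_info" | jq -r '.state'                      # online
echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered'  # 2
```

After the base bdev is removed later in the test, the same fields are expected to read `online` / discovered `1`, which is how the harness distinguishes a degraded-but-serving array from a failed one.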
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:30.085 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:30.345 [2024-12-07 02:44:41.296718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:30.345 /dev/nbd0 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.345 1+0 records in 00:11:30.345 1+0 records out 00:11:30.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286976 s, 14.3 MB/s 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:30.345 02:44:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:34.537 65536+0 records in 00:11:34.537 65536+0 records out 00:11:34.537 33554432 bytes (34 MB, 32 MiB) copied, 3.80011 s, 8.8 MB/s 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:34.537 [2024-12-07 02:44:45.391765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.537 
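The dd figures above are consistent with the array geometry: the fill step writes every block of the 65536-block, 512-byte-block RAID1 device through /dev/nbd0. A quick check of the byte count dd reports:

```shell
# 65536 blocks of 512 bytes each: the total matches the
# "33554432 bytes (34 MB, 32 MiB) copied" dd summary in the log.
block_size=512
block_count=65536
total_bytes=$(( block_size * block_count ))
echo "$total_bytes"   # 33554432
```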
02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.537 [2024-12-07 02:44:45.407855] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.537 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.538 "name": "raid_bdev1", 00:11:34.538 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:34.538 "strip_size_kb": 0, 00:11:34.538 "state": "online", 00:11:34.538 "raid_level": "raid1", 00:11:34.538 "superblock": false, 00:11:34.538 "num_base_bdevs": 2, 00:11:34.538 "num_base_bdevs_discovered": 1, 00:11:34.538 "num_base_bdevs_operational": 1, 00:11:34.538 "base_bdevs_list": [ 00:11:34.538 { 00:11:34.538 "name": null, 00:11:34.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.538 "is_configured": false, 00:11:34.538 "data_offset": 0, 00:11:34.538 "data_size": 65536 00:11:34.538 }, 00:11:34.538 { 00:11:34.538 "name": "BaseBdev2", 00:11:34.538 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:34.538 "is_configured": true, 00:11:34.538 "data_offset": 0, 00:11:34.538 "data_size": 65536 00:11:34.538 } 00:11:34.538 ] 00:11:34.538 }' 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.538 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.797 02:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:34.797 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.797 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.797 [2024-12-07 02:44:45.859129] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:34.797 [2024-12-07 02:44:45.866450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:11:34.797 02:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.797 02:44:45 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:34.797 [2024-12-07 02:44:45.868721] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.175 "name": "raid_bdev1", 00:11:36.175 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:36.175 "strip_size_kb": 0, 00:11:36.175 "state": "online", 00:11:36.175 "raid_level": "raid1", 00:11:36.175 "superblock": false, 00:11:36.175 "num_base_bdevs": 2, 00:11:36.175 "num_base_bdevs_discovered": 2, 00:11:36.175 "num_base_bdevs_operational": 2, 00:11:36.175 "process": { 00:11:36.175 "type": "rebuild", 00:11:36.175 "target": "spare", 00:11:36.175 "progress": { 00:11:36.175 "blocks": 20480, 00:11:36.175 "percent": 31 00:11:36.175 } 00:11:36.175 }, 00:11:36.175 "base_bdevs_list": [ 00:11:36.175 { 
00:11:36.175 "name": "spare", 00:11:36.175 "uuid": "9839cabf-cf39-5795-9204-cff5b75e4378", 00:11:36.175 "is_configured": true, 00:11:36.175 "data_offset": 0, 00:11:36.175 "data_size": 65536 00:11:36.175 }, 00:11:36.175 { 00:11:36.175 "name": "BaseBdev2", 00:11:36.175 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:36.175 "is_configured": true, 00:11:36.175 "data_offset": 0, 00:11:36.175 "data_size": 65536 00:11:36.175 } 00:11:36.175 ] 00:11:36.175 }' 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.175 02:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.175 [2024-12-07 02:44:47.004603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.175 [2024-12-07 02:44:47.077271] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:36.175 [2024-12-07 02:44:47.077331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.175 [2024-12-07 02:44:47.077351] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.175 [2024-12-07 02:44:47.077358] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.175 02:44:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.175 "name": "raid_bdev1", 00:11:36.175 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:36.175 "strip_size_kb": 0, 00:11:36.175 "state": "online", 00:11:36.175 "raid_level": "raid1", 00:11:36.175 "superblock": false, 00:11:36.175 "num_base_bdevs": 2, 00:11:36.175 "num_base_bdevs_discovered": 1, 
00:11:36.175 "num_base_bdevs_operational": 1, 00:11:36.175 "base_bdevs_list": [ 00:11:36.175 { 00:11:36.175 "name": null, 00:11:36.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.175 "is_configured": false, 00:11:36.175 "data_offset": 0, 00:11:36.175 "data_size": 65536 00:11:36.175 }, 00:11:36.175 { 00:11:36.175 "name": "BaseBdev2", 00:11:36.175 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:36.175 "is_configured": true, 00:11:36.175 "data_offset": 0, 00:11:36.175 "data_size": 65536 00:11:36.175 } 00:11:36.175 ] 00:11:36.175 }' 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.175 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.743 "name": "raid_bdev1", 00:11:36.743 "uuid": 
"241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:36.743 "strip_size_kb": 0, 00:11:36.743 "state": "online", 00:11:36.743 "raid_level": "raid1", 00:11:36.743 "superblock": false, 00:11:36.743 "num_base_bdevs": 2, 00:11:36.743 "num_base_bdevs_discovered": 1, 00:11:36.743 "num_base_bdevs_operational": 1, 00:11:36.743 "base_bdevs_list": [ 00:11:36.743 { 00:11:36.743 "name": null, 00:11:36.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.743 "is_configured": false, 00:11:36.743 "data_offset": 0, 00:11:36.743 "data_size": 65536 00:11:36.743 }, 00:11:36.743 { 00:11:36.743 "name": "BaseBdev2", 00:11:36.743 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:36.743 "is_configured": true, 00:11:36.743 "data_offset": 0, 00:11:36.743 "data_size": 65536 00:11:36.743 } 00:11:36.743 ] 00:11:36.743 }' 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.743 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:36.744 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:36.744 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.744 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.744 [2024-12-07 02:44:47.695864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:36.744 [2024-12-07 02:44:47.703151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:11:36.744 02:44:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.744 02:44:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:36.744 [2024-12-07 02:44:47.705268] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.691 "name": "raid_bdev1", 00:11:37.691 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:37.691 "strip_size_kb": 0, 00:11:37.691 "state": "online", 00:11:37.691 "raid_level": "raid1", 00:11:37.691 "superblock": false, 00:11:37.691 "num_base_bdevs": 2, 00:11:37.691 "num_base_bdevs_discovered": 2, 00:11:37.691 "num_base_bdevs_operational": 2, 00:11:37.691 "process": { 00:11:37.691 "type": "rebuild", 00:11:37.691 "target": "spare", 00:11:37.691 "progress": { 00:11:37.691 "blocks": 20480, 00:11:37.691 "percent": 31 00:11:37.691 } 00:11:37.691 }, 00:11:37.691 "base_bdevs_list": [ 00:11:37.691 { 00:11:37.691 "name": "spare", 00:11:37.691 "uuid": 
"9839cabf-cf39-5795-9204-cff5b75e4378", 00:11:37.691 "is_configured": true, 00:11:37.691 "data_offset": 0, 00:11:37.691 "data_size": 65536 00:11:37.691 }, 00:11:37.691 { 00:11:37.691 "name": "BaseBdev2", 00:11:37.691 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:37.691 "is_configured": true, 00:11:37.691 "data_offset": 0, 00:11:37.691 "data_size": 65536 00:11:37.691 } 00:11:37.691 ] 00:11:37.691 }' 00:11:37.691 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=301 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.950 "name": "raid_bdev1", 00:11:37.950 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:37.950 "strip_size_kb": 0, 00:11:37.950 "state": "online", 00:11:37.950 "raid_level": "raid1", 00:11:37.950 "superblock": false, 00:11:37.950 "num_base_bdevs": 2, 00:11:37.950 "num_base_bdevs_discovered": 2, 00:11:37.950 "num_base_bdevs_operational": 2, 00:11:37.950 "process": { 00:11:37.950 "type": "rebuild", 00:11:37.950 "target": "spare", 00:11:37.950 "progress": { 00:11:37.950 "blocks": 22528, 00:11:37.950 "percent": 34 00:11:37.950 } 00:11:37.950 }, 00:11:37.950 "base_bdevs_list": [ 00:11:37.950 { 00:11:37.950 "name": "spare", 00:11:37.950 "uuid": "9839cabf-cf39-5795-9204-cff5b75e4378", 00:11:37.950 "is_configured": true, 00:11:37.950 "data_offset": 0, 00:11:37.950 "data_size": 65536 00:11:37.950 }, 00:11:37.950 { 00:11:37.950 "name": "BaseBdev2", 00:11:37.950 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:37.950 "is_configured": true, 00:11:37.950 "data_offset": 0, 00:11:37.950 "data_size": 65536 00:11:37.950 } 00:11:37.950 ] 00:11:37.950 }' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:37.950 02:44:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.950 02:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:37.950 02:44:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:39.338 "name": "raid_bdev1", 00:11:39.338 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:39.338 "strip_size_kb": 0, 00:11:39.338 "state": "online", 00:11:39.338 "raid_level": "raid1", 00:11:39.338 "superblock": false, 00:11:39.338 "num_base_bdevs": 2, 00:11:39.338 "num_base_bdevs_discovered": 2, 00:11:39.338 "num_base_bdevs_operational": 2, 00:11:39.338 "process": { 00:11:39.338 "type": "rebuild", 00:11:39.338 "target": "spare", 
00:11:39.338 "progress": { 00:11:39.338 "blocks": 47104, 00:11:39.338 "percent": 71 00:11:39.338 } 00:11:39.338 }, 00:11:39.338 "base_bdevs_list": [ 00:11:39.338 { 00:11:39.338 "name": "spare", 00:11:39.338 "uuid": "9839cabf-cf39-5795-9204-cff5b75e4378", 00:11:39.338 "is_configured": true, 00:11:39.338 "data_offset": 0, 00:11:39.338 "data_size": 65536 00:11:39.338 }, 00:11:39.338 { 00:11:39.338 "name": "BaseBdev2", 00:11:39.338 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:39.338 "is_configured": true, 00:11:39.338 "data_offset": 0, 00:11:39.338 "data_size": 65536 00:11:39.338 } 00:11:39.338 ] 00:11:39.338 }' 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:39.338 02:44:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:39.907 [2024-12-07 02:44:50.926044] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:39.907 [2024-12-07 02:44:50.926178] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:39.907 [2024-12-07 02:44:50.926236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.167 "name": "raid_bdev1", 00:11:40.167 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:40.167 "strip_size_kb": 0, 00:11:40.167 "state": "online", 00:11:40.167 "raid_level": "raid1", 00:11:40.167 "superblock": false, 00:11:40.167 "num_base_bdevs": 2, 00:11:40.167 "num_base_bdevs_discovered": 2, 00:11:40.167 "num_base_bdevs_operational": 2, 00:11:40.167 "base_bdevs_list": [ 00:11:40.167 { 00:11:40.167 "name": "spare", 00:11:40.167 "uuid": "9839cabf-cf39-5795-9204-cff5b75e4378", 00:11:40.167 "is_configured": true, 00:11:40.167 "data_offset": 0, 00:11:40.167 "data_size": 65536 00:11:40.167 }, 00:11:40.167 { 00:11:40.167 "name": "BaseBdev2", 00:11:40.167 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:40.167 "is_configured": true, 00:11:40.167 "data_offset": 0, 00:11:40.167 "data_size": 65536 00:11:40.167 } 00:11:40.167 ] 00:11:40.167 }' 00:11:40.167 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.427 "name": "raid_bdev1", 00:11:40.427 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:40.427 "strip_size_kb": 0, 00:11:40.427 "state": "online", 00:11:40.427 "raid_level": "raid1", 00:11:40.427 "superblock": false, 00:11:40.427 "num_base_bdevs": 2, 00:11:40.427 "num_base_bdevs_discovered": 2, 00:11:40.427 "num_base_bdevs_operational": 2, 00:11:40.427 "base_bdevs_list": [ 00:11:40.427 { 00:11:40.427 "name": "spare", 00:11:40.427 "uuid": "9839cabf-cf39-5795-9204-cff5b75e4378", 00:11:40.427 "is_configured": true, 00:11:40.427 "data_offset": 0, 00:11:40.427 "data_size": 65536 
00:11:40.427 }, 00:11:40.427 { 00:11:40.427 "name": "BaseBdev2", 00:11:40.427 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:40.427 "is_configured": true, 00:11:40.427 "data_offset": 0, 00:11:40.427 "data_size": 65536 00:11:40.427 } 00:11:40.427 ] 00:11:40.427 }' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.427 "name": "raid_bdev1", 00:11:40.427 "uuid": "241ef6b4-4e1c-432a-8c09-ef2b6f0d4000", 00:11:40.427 "strip_size_kb": 0, 00:11:40.427 "state": "online", 00:11:40.427 "raid_level": "raid1", 00:11:40.427 "superblock": false, 00:11:40.427 "num_base_bdevs": 2, 00:11:40.427 "num_base_bdevs_discovered": 2, 00:11:40.427 "num_base_bdevs_operational": 2, 00:11:40.427 "base_bdevs_list": [ 00:11:40.427 { 00:11:40.427 "name": "spare", 00:11:40.427 "uuid": "9839cabf-cf39-5795-9204-cff5b75e4378", 00:11:40.427 "is_configured": true, 00:11:40.427 "data_offset": 0, 00:11:40.427 "data_size": 65536 00:11:40.427 }, 00:11:40.427 { 00:11:40.427 "name": "BaseBdev2", 00:11:40.427 "uuid": "88414d8c-b40f-5e67-8546-a347abb4b244", 00:11:40.427 "is_configured": true, 00:11:40.427 "data_offset": 0, 00:11:40.427 "data_size": 65536 00:11:40.427 } 00:11:40.427 ] 00:11:40.427 }' 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.427 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.998 [2024-12-07 02:44:51.892505] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.998 [2024-12-07 02:44:51.892619] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:11:40.998 [2024-12-07 02:44:51.892743] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.998 [2024-12-07 02:44:51.892850] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.998 [2024-12-07 02:44:51.892948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:40.998 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:40.999 02:44:51 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:40.999 02:44:51 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:41.258 /dev/nbd0 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:41.258 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.259 1+0 records in 00:11:41.259 1+0 records out 00:11:41.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540722 s, 7.6 MB/s 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:41.259 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:41.521 /dev/nbd1 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:41.521 1+0 records in 00:11:41.521 1+0 records out 00:11:41.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024193 s, 16.9 MB/s 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:41.521 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:41.787 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86260 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@950 -- # '[' -z 86260 ']' 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 86260 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86260 00:11:42.070 killing process with pid 86260 00:11:42.070 Received shutdown signal, test time was about 60.000000 seconds 00:11:42.070 00:11:42.070 Latency(us) 00:11:42.070 [2024-12-07T02:44:53.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.070 [2024-12-07T02:44:53.148Z] =================================================================================================================== 00:11:42.070 [2024-12-07T02:44:53.148Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86260' 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 86260 00:11:42.070 [2024-12-07 02:44:52.993149] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.070 02:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 86260 00:11:42.070 [2024-12-07 02:44:53.049325] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:42.647 00:11:42.647 real 0m13.851s 00:11:42.647 user 0m15.769s 00:11:42.647 sys 0m3.039s 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:11:42.647 ************************************ 00:11:42.647 END TEST raid_rebuild_test 00:11:42.647 ************************************ 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.647 02:44:53 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:42.647 02:44:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:42.647 02:44:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:42.647 02:44:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:42.647 ************************************ 00:11:42.647 START TEST raid_rebuild_test_sb 00:11:42.647 ************************************ 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:42.647 02:44:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86665 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86665 00:11:42.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86665 ']' 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:42.647 02:44:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.647 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:42.647 Zero copy mechanism will not be used. 00:11:42.647 [2024-12-07 02:44:53.582744] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:11:42.647 [2024-12-07 02:44:53.582872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86665 ] 00:11:42.906 [2024-12-07 02:44:53.742479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.906 [2024-12-07 02:44:53.813334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.906 [2024-12-07 02:44:53.889750] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.906 [2024-12-07 02:44:53.889886] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 BaseBdev1_malloc 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 [2024-12-07 02:44:54.440902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:43.476 [2024-12-07 02:44:54.440976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.476 [2024-12-07 02:44:54.441006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:43.476 [2024-12-07 02:44:54.441031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.476 [2024-12-07 02:44:54.443497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.476 [2024-12-07 02:44:54.443573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:43.476 BaseBdev1 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 BaseBdev2_malloc 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 [2024-12-07 02:44:54.485032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:43.476 [2024-12-07 02:44:54.485088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.476 [2024-12-07 02:44:54.485112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:43.476 [2024-12-07 02:44:54.485122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.476 [2024-12-07 02:44:54.487625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.476 [2024-12-07 02:44:54.487670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:43.476 BaseBdev2 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 spare_malloc 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 spare_delay 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 [2024-12-07 02:44:54.532078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:43.476 [2024-12-07 02:44:54.532130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.476 [2024-12-07 02:44:54.532151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:43.476 [2024-12-07 02:44:54.532160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.476 [2024-12-07 02:44:54.534563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.476 [2024-12-07 02:44:54.534612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:43.476 spare 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.476 
02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.476 [2024-12-07 02:44:54.544104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.476 [2024-12-07 02:44:54.546165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.476 [2024-12-07 02:44:54.546310] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:11:43.476 [2024-12-07 02:44:54.546322] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.476 [2024-12-07 02:44:54.546588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:11:43.476 [2024-12-07 02:44:54.546746] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:11:43.476 [2024-12-07 02:44:54.546759] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:11:43.476 [2024-12-07 02:44:54.546871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.476 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.736 "name": "raid_bdev1", 00:11:43.736 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:43.736 "strip_size_kb": 0, 00:11:43.736 "state": "online", 00:11:43.736 "raid_level": "raid1", 00:11:43.736 "superblock": true, 00:11:43.736 "num_base_bdevs": 2, 00:11:43.736 "num_base_bdevs_discovered": 2, 00:11:43.736 "num_base_bdevs_operational": 2, 00:11:43.736 "base_bdevs_list": [ 00:11:43.736 { 00:11:43.736 "name": "BaseBdev1", 00:11:43.736 "uuid": "926263a7-2abe-528b-9770-9814fdef90fe", 00:11:43.736 "is_configured": true, 00:11:43.736 "data_offset": 2048, 00:11:43.736 "data_size": 63488 00:11:43.736 }, 00:11:43.736 { 00:11:43.736 "name": "BaseBdev2", 00:11:43.736 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:43.736 "is_configured": true, 00:11:43.736 "data_offset": 2048, 00:11:43.736 "data_size": 63488 00:11:43.736 } 00:11:43.736 ] 00:11:43.736 }' 00:11:43.736 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.736 02:44:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.996 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.996 02:44:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:43.996 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.996 02:44:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.996 [2024-12-07 02:44:55.003853] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.996 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.996 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:43.996 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.996 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:43.996 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.996 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.996 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:44.255 [2024-12-07 02:44:55.263023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:44.255 /dev/nbd0 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:44.255 1+0 records in 00:11:44.255 1+0 records out 00:11:44.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355433 s, 11.5 MB/s 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:44.255 02:44:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:48.452 63488+0 records in 00:11:48.452 63488+0 records out 00:11:48.452 32505856 bytes (33 MB, 31 MiB) copied, 4.01953 s, 8.1 MB/s 00:11:48.452 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:48.452 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:11:48.452 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:48.452 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:48.452 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:48.452 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.452 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:48.712 [2024-12-07 02:44:59.539236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.712 [2024-12-07 02:44:59.573691] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.712 "name": "raid_bdev1", 00:11:48.712 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:48.712 "strip_size_kb": 0, 00:11:48.712 "state": "online", 00:11:48.712 "raid_level": "raid1", 
00:11:48.712 "superblock": true, 00:11:48.712 "num_base_bdevs": 2, 00:11:48.712 "num_base_bdevs_discovered": 1, 00:11:48.712 "num_base_bdevs_operational": 1, 00:11:48.712 "base_bdevs_list": [ 00:11:48.712 { 00:11:48.712 "name": null, 00:11:48.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.712 "is_configured": false, 00:11:48.712 "data_offset": 0, 00:11:48.712 "data_size": 63488 00:11:48.712 }, 00:11:48.712 { 00:11:48.712 "name": "BaseBdev2", 00:11:48.712 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:48.712 "is_configured": true, 00:11:48.712 "data_offset": 2048, 00:11:48.712 "data_size": 63488 00:11:48.712 } 00:11:48.712 ] 00:11:48.712 }' 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.712 02:44:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.972 02:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:48.972 02:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.972 02:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.972 [2024-12-07 02:45:00.012905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:48.972 [2024-12-07 02:45:00.020180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:11:48.972 02:45:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.972 02:45:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:48.972 [2024-12-07 02:45:00.022349] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.353 "name": "raid_bdev1", 00:11:50.353 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:50.353 "strip_size_kb": 0, 00:11:50.353 "state": "online", 00:11:50.353 "raid_level": "raid1", 00:11:50.353 "superblock": true, 00:11:50.353 "num_base_bdevs": 2, 00:11:50.353 "num_base_bdevs_discovered": 2, 00:11:50.353 "num_base_bdevs_operational": 2, 00:11:50.353 "process": { 00:11:50.353 "type": "rebuild", 00:11:50.353 "target": "spare", 00:11:50.353 "progress": { 00:11:50.353 "blocks": 20480, 00:11:50.353 "percent": 32 00:11:50.353 } 00:11:50.353 }, 00:11:50.353 "base_bdevs_list": [ 00:11:50.353 { 00:11:50.353 "name": "spare", 00:11:50.353 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:50.353 "is_configured": true, 00:11:50.353 "data_offset": 2048, 00:11:50.353 "data_size": 63488 00:11:50.353 }, 00:11:50.353 { 00:11:50.353 "name": "BaseBdev2", 00:11:50.353 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:50.353 "is_configured": true, 00:11:50.353 "data_offset": 2048, 
00:11:50.353 "data_size": 63488 00:11:50.353 } 00:11:50.353 ] 00:11:50.353 }' 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.353 [2024-12-07 02:45:01.186717] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.353 [2024-12-07 02:45:01.230625] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:50.353 [2024-12-07 02:45:01.230679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.353 [2024-12-07 02:45:01.230700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.353 [2024-12-07 02:45:01.230707] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.353 02:45:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.353 "name": "raid_bdev1", 00:11:50.353 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:50.353 "strip_size_kb": 0, 00:11:50.353 "state": "online", 00:11:50.353 "raid_level": "raid1", 00:11:50.353 "superblock": true, 00:11:50.353 "num_base_bdevs": 2, 00:11:50.353 "num_base_bdevs_discovered": 1, 00:11:50.353 "num_base_bdevs_operational": 1, 00:11:50.353 "base_bdevs_list": [ 00:11:50.353 { 00:11:50.353 "name": null, 00:11:50.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.353 "is_configured": false, 00:11:50.353 "data_offset": 0, 00:11:50.353 "data_size": 63488 00:11:50.353 }, 00:11:50.353 { 
00:11:50.353 "name": "BaseBdev2", 00:11:50.353 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:50.353 "is_configured": true, 00:11:50.353 "data_offset": 2048, 00:11:50.353 "data_size": 63488 00:11:50.353 } 00:11:50.353 ] 00:11:50.353 }' 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.353 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.923 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.923 "name": "raid_bdev1", 00:11:50.923 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:50.923 "strip_size_kb": 0, 00:11:50.923 "state": "online", 00:11:50.923 "raid_level": "raid1", 00:11:50.923 "superblock": true, 00:11:50.923 "num_base_bdevs": 2, 00:11:50.923 "num_base_bdevs_discovered": 1, 00:11:50.923 "num_base_bdevs_operational": 1, 
00:11:50.923 "base_bdevs_list": [ 00:11:50.923 { 00:11:50.923 "name": null, 00:11:50.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.923 "is_configured": false, 00:11:50.923 "data_offset": 0, 00:11:50.923 "data_size": 63488 00:11:50.923 }, 00:11:50.923 { 00:11:50.924 "name": "BaseBdev2", 00:11:50.924 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:50.924 "is_configured": true, 00:11:50.924 "data_offset": 2048, 00:11:50.924 "data_size": 63488 00:11:50.924 } 00:11:50.924 ] 00:11:50.924 }' 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.924 [2024-12-07 02:45:01.844864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:50.924 [2024-12-07 02:45:01.851851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.924 02:45:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:50.924 [2024-12-07 02:45:01.853969] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.863 "name": "raid_bdev1", 00:11:51.863 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:51.863 "strip_size_kb": 0, 00:11:51.863 "state": "online", 00:11:51.863 "raid_level": "raid1", 00:11:51.863 "superblock": true, 00:11:51.863 "num_base_bdevs": 2, 00:11:51.863 "num_base_bdevs_discovered": 2, 00:11:51.863 "num_base_bdevs_operational": 2, 00:11:51.863 "process": { 00:11:51.863 "type": "rebuild", 00:11:51.863 "target": "spare", 00:11:51.863 "progress": { 00:11:51.863 "blocks": 20480, 00:11:51.863 "percent": 32 00:11:51.863 } 00:11:51.863 }, 00:11:51.863 "base_bdevs_list": [ 00:11:51.863 { 00:11:51.863 "name": "spare", 00:11:51.863 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:51.863 "is_configured": true, 00:11:51.863 "data_offset": 2048, 00:11:51.863 "data_size": 63488 00:11:51.863 }, 00:11:51.863 { 00:11:51.863 "name": "BaseBdev2", 00:11:51.863 "uuid": 
"33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:51.863 "is_configured": true, 00:11:51.863 "data_offset": 2048, 00:11:51.863 "data_size": 63488 00:11:51.863 } 00:11:51.863 ] 00:11:51.863 }' 00:11:51.863 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.123 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.123 02:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:52.123 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=316 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.123 "name": "raid_bdev1", 00:11:52.123 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:52.123 "strip_size_kb": 0, 00:11:52.123 "state": "online", 00:11:52.123 "raid_level": "raid1", 00:11:52.123 "superblock": true, 00:11:52.123 "num_base_bdevs": 2, 00:11:52.123 "num_base_bdevs_discovered": 2, 00:11:52.123 "num_base_bdevs_operational": 2, 00:11:52.123 "process": { 00:11:52.123 "type": "rebuild", 00:11:52.123 "target": "spare", 00:11:52.123 "progress": { 00:11:52.123 "blocks": 22528, 00:11:52.123 "percent": 35 00:11:52.123 } 00:11:52.123 }, 00:11:52.123 "base_bdevs_list": [ 00:11:52.123 { 00:11:52.123 "name": "spare", 00:11:52.123 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:52.123 "is_configured": true, 00:11:52.123 "data_offset": 2048, 00:11:52.123 "data_size": 63488 00:11:52.123 }, 00:11:52.123 { 00:11:52.123 "name": "BaseBdev2", 00:11:52.123 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:52.123 "is_configured": true, 00:11:52.123 "data_offset": 2048, 00:11:52.123 "data_size": 63488 00:11:52.123 } 00:11:52.123 ] 00:11:52.123 }' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.123 02:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.522 "name": "raid_bdev1", 00:11:53.522 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:53.522 "strip_size_kb": 0, 00:11:53.522 "state": "online", 00:11:53.522 "raid_level": "raid1", 00:11:53.522 "superblock": true, 00:11:53.522 "num_base_bdevs": 2, 00:11:53.522 "num_base_bdevs_discovered": 2, 00:11:53.522 
"num_base_bdevs_operational": 2, 00:11:53.522 "process": { 00:11:53.522 "type": "rebuild", 00:11:53.522 "target": "spare", 00:11:53.522 "progress": { 00:11:53.522 "blocks": 45056, 00:11:53.522 "percent": 70 00:11:53.522 } 00:11:53.522 }, 00:11:53.522 "base_bdevs_list": [ 00:11:53.522 { 00:11:53.522 "name": "spare", 00:11:53.522 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:53.522 "is_configured": true, 00:11:53.522 "data_offset": 2048, 00:11:53.522 "data_size": 63488 00:11:53.522 }, 00:11:53.522 { 00:11:53.522 "name": "BaseBdev2", 00:11:53.522 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:53.522 "is_configured": true, 00:11:53.522 "data_offset": 2048, 00:11:53.522 "data_size": 63488 00:11:53.522 } 00:11:53.522 ] 00:11:53.522 }' 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.522 02:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:54.092 [2024-12-07 02:45:04.973824] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:54.092 [2024-12-07 02:45:04.974017] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:54.092 [2024-12-07 02:45:04.974164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.352 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.352 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.353 "name": "raid_bdev1", 00:11:54.353 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:54.353 "strip_size_kb": 0, 00:11:54.353 "state": "online", 00:11:54.353 "raid_level": "raid1", 00:11:54.353 "superblock": true, 00:11:54.353 "num_base_bdevs": 2, 00:11:54.353 "num_base_bdevs_discovered": 2, 00:11:54.353 "num_base_bdevs_operational": 2, 00:11:54.353 "base_bdevs_list": [ 00:11:54.353 { 00:11:54.353 "name": "spare", 00:11:54.353 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:54.353 "is_configured": true, 00:11:54.353 "data_offset": 2048, 00:11:54.353 "data_size": 63488 00:11:54.353 }, 00:11:54.353 { 00:11:54.353 "name": "BaseBdev2", 00:11:54.353 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:54.353 "is_configured": true, 00:11:54.353 "data_offset": 2048, 00:11:54.353 "data_size": 63488 00:11:54.353 } 00:11:54.353 ] 00:11:54.353 }' 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:54.353 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.611 "name": "raid_bdev1", 00:11:54.611 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:54.611 "strip_size_kb": 0, 00:11:54.611 "state": "online", 00:11:54.611 "raid_level": "raid1", 00:11:54.611 "superblock": true, 00:11:54.611 "num_base_bdevs": 2, 00:11:54.611 "num_base_bdevs_discovered": 2, 00:11:54.611 "num_base_bdevs_operational": 2, 
00:11:54.611 "base_bdevs_list": [ 00:11:54.611 { 00:11:54.611 "name": "spare", 00:11:54.611 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:54.611 "is_configured": true, 00:11:54.611 "data_offset": 2048, 00:11:54.611 "data_size": 63488 00:11:54.611 }, 00:11:54.611 { 00:11:54.611 "name": "BaseBdev2", 00:11:54.611 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:54.611 "is_configured": true, 00:11:54.611 "data_offset": 2048, 00:11:54.611 "data_size": 63488 00:11:54.611 } 00:11:54.611 ] 00:11:54.611 }' 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.611 02:45:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.611 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.611 "name": "raid_bdev1", 00:11:54.611 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:54.611 "strip_size_kb": 0, 00:11:54.611 "state": "online", 00:11:54.611 "raid_level": "raid1", 00:11:54.611 "superblock": true, 00:11:54.611 "num_base_bdevs": 2, 00:11:54.611 "num_base_bdevs_discovered": 2, 00:11:54.611 "num_base_bdevs_operational": 2, 00:11:54.611 "base_bdevs_list": [ 00:11:54.611 { 00:11:54.611 "name": "spare", 00:11:54.611 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:54.611 "is_configured": true, 00:11:54.611 "data_offset": 2048, 00:11:54.611 "data_size": 63488 00:11:54.611 }, 00:11:54.611 { 00:11:54.611 "name": "BaseBdev2", 00:11:54.611 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:54.611 "is_configured": true, 00:11:54.611 "data_offset": 2048, 00:11:54.612 "data_size": 63488 00:11:54.612 } 00:11:54.612 ] 00:11:54.612 }' 00:11:54.612 02:45:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.612 02:45:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.178 [2024-12-07 02:45:06.047584] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.178 [2024-12-07 02:45:06.047625] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.178 [2024-12-07 02:45:06.047723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.178 [2024-12-07 02:45:06.047803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.178 [2024-12-07 02:45:06.047819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:55.178 
02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.178 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:55.438 /dev/nbd0 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:55.438 02:45:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.438 1+0 records in 00:11:55.438 1+0 records out 00:11:55.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585381 s, 7.0 MB/s 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.438 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:55.698 /dev/nbd1 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- 
# grep -q -w nbd1 /proc/partitions 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.698 1+0 records in 00:11:55.698 1+0 records out 00:11:55.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451645 s, 9.1 MB/s 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.698 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.958 02:45:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.218 [2024-12-07 02:45:07.197107] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:56.218 [2024-12-07 02:45:07.197207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.218 [2024-12-07 02:45:07.197233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:56.218 [2024-12-07 02:45:07.197248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.218 [2024-12-07 02:45:07.199779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.218 [2024-12-07 02:45:07.199816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:56.218 [2024-12-07 02:45:07.199906] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev spare 00:11:56.218 [2024-12-07 02:45:07.199958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:56.218 [2024-12-07 02:45:07.200086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.218 spare 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.218 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.491 [2024-12-07 02:45:07.299988] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:11:56.491 [2024-12-07 02:45:07.300016] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.491 [2024-12-07 02:45:07.300362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:11:56.491 [2024-12-07 02:45:07.300545] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:11:56.491 [2024-12-07 02:45:07.300561] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:11:56.491 [2024-12-07 02:45:07.300755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.491 02:45:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.491 "name": "raid_bdev1", 00:11:56.491 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:56.491 "strip_size_kb": 0, 00:11:56.491 "state": "online", 00:11:56.491 "raid_level": "raid1", 00:11:56.491 "superblock": true, 00:11:56.491 "num_base_bdevs": 2, 00:11:56.491 "num_base_bdevs_discovered": 2, 00:11:56.491 "num_base_bdevs_operational": 2, 00:11:56.491 "base_bdevs_list": [ 00:11:56.491 { 00:11:56.491 "name": "spare", 00:11:56.491 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:56.491 "is_configured": true, 00:11:56.491 "data_offset": 2048, 00:11:56.491 "data_size": 63488 00:11:56.491 }, 00:11:56.491 { 
00:11:56.491 "name": "BaseBdev2", 00:11:56.491 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:56.491 "is_configured": true, 00:11:56.491 "data_offset": 2048, 00:11:56.491 "data_size": 63488 00:11:56.491 } 00:11:56.491 ] 00:11:56.491 }' 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.491 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.769 "name": "raid_bdev1", 00:11:56.769 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:56.769 "strip_size_kb": 0, 00:11:56.769 "state": "online", 00:11:56.769 "raid_level": "raid1", 00:11:56.769 "superblock": true, 00:11:56.769 "num_base_bdevs": 2, 00:11:56.769 "num_base_bdevs_discovered": 2, 00:11:56.769 "num_base_bdevs_operational": 2, 
00:11:56.769 "base_bdevs_list": [ 00:11:56.769 { 00:11:56.769 "name": "spare", 00:11:56.769 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:56.769 "is_configured": true, 00:11:56.769 "data_offset": 2048, 00:11:56.769 "data_size": 63488 00:11:56.769 }, 00:11:56.769 { 00:11:56.769 "name": "BaseBdev2", 00:11:56.769 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:56.769 "is_configured": true, 00:11:56.769 "data_offset": 2048, 00:11:56.769 "data_size": 63488 00:11:56.769 } 00:11:56.769 ] 00:11:56.769 }' 00:11:56.769 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.030 [2024-12-07 02:45:07.967792] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.030 02:45:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.030 02:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.030 "name": "raid_bdev1", 00:11:57.030 "uuid": 
"29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:57.030 "strip_size_kb": 0, 00:11:57.030 "state": "online", 00:11:57.030 "raid_level": "raid1", 00:11:57.030 "superblock": true, 00:11:57.030 "num_base_bdevs": 2, 00:11:57.030 "num_base_bdevs_discovered": 1, 00:11:57.030 "num_base_bdevs_operational": 1, 00:11:57.030 "base_bdevs_list": [ 00:11:57.030 { 00:11:57.030 "name": null, 00:11:57.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.030 "is_configured": false, 00:11:57.030 "data_offset": 0, 00:11:57.030 "data_size": 63488 00:11:57.030 }, 00:11:57.030 { 00:11:57.030 "name": "BaseBdev2", 00:11:57.030 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:57.030 "is_configured": true, 00:11:57.030 "data_offset": 2048, 00:11:57.030 "data_size": 63488 00:11:57.030 } 00:11:57.030 ] 00:11:57.030 }' 00:11:57.030 02:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.030 02:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.599 02:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:57.599 02:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.599 02:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.599 [2024-12-07 02:45:08.451028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.599 [2024-12-07 02:45:08.451304] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:57.599 [2024-12-07 02:45:08.451324] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:57.599 [2024-12-07 02:45:08.451368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.599 [2024-12-07 02:45:08.458468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1a10 00:11:57.599 02:45:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.599 02:45:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:57.599 [2024-12-07 02:45:08.460778] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.557 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.557 "name": "raid_bdev1", 00:11:58.557 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:58.557 "strip_size_kb": 0, 00:11:58.557 "state": "online", 00:11:58.557 "raid_level": "raid1", 
00:11:58.557 "superblock": true, 00:11:58.557 "num_base_bdevs": 2, 00:11:58.558 "num_base_bdevs_discovered": 2, 00:11:58.558 "num_base_bdevs_operational": 2, 00:11:58.558 "process": { 00:11:58.558 "type": "rebuild", 00:11:58.558 "target": "spare", 00:11:58.558 "progress": { 00:11:58.558 "blocks": 20480, 00:11:58.558 "percent": 32 00:11:58.558 } 00:11:58.558 }, 00:11:58.558 "base_bdevs_list": [ 00:11:58.558 { 00:11:58.558 "name": "spare", 00:11:58.558 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:11:58.558 "is_configured": true, 00:11:58.558 "data_offset": 2048, 00:11:58.558 "data_size": 63488 00:11:58.558 }, 00:11:58.558 { 00:11:58.558 "name": "BaseBdev2", 00:11:58.558 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:58.558 "is_configured": true, 00:11:58.558 "data_offset": 2048, 00:11:58.558 "data_size": 63488 00:11:58.558 } 00:11:58.558 ] 00:11:58.558 }' 00:11:58.558 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.558 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.558 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.558 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.558 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:58.558 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.558 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.558 [2024-12-07 02:45:09.604666] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.818 [2024-12-07 02:45:09.668163] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:58.818 [2024-12-07 02:45:09.668220] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:11:58.818 [2024-12-07 02:45:09.668239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.818 [2024-12-07 02:45:09.668247] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.818 "name": "raid_bdev1", 00:11:58.818 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:11:58.818 "strip_size_kb": 0, 00:11:58.818 "state": "online", 00:11:58.818 "raid_level": "raid1", 00:11:58.818 "superblock": true, 00:11:58.818 "num_base_bdevs": 2, 00:11:58.818 "num_base_bdevs_discovered": 1, 00:11:58.818 "num_base_bdevs_operational": 1, 00:11:58.818 "base_bdevs_list": [ 00:11:58.818 { 00:11:58.818 "name": null, 00:11:58.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.818 "is_configured": false, 00:11:58.818 "data_offset": 0, 00:11:58.818 "data_size": 63488 00:11:58.818 }, 00:11:58.818 { 00:11:58.818 "name": "BaseBdev2", 00:11:58.818 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:11:58.818 "is_configured": true, 00:11:58.818 "data_offset": 2048, 00:11:58.818 "data_size": 63488 00:11:58.818 } 00:11:58.818 ] 00:11:58.818 }' 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.818 02:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.387 02:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:59.387 02:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.387 02:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.387 [2024-12-07 02:45:10.158669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:59.387 [2024-12-07 02:45:10.158733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.388 [2024-12-07 02:45:10.158758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:59.388 [2024-12-07 02:45:10.158768] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.388 [2024-12-07 02:45:10.159272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.388 [2024-12-07 02:45:10.159290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:59.388 [2024-12-07 02:45:10.159378] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:59.388 [2024-12-07 02:45:10.159390] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:59.388 [2024-12-07 02:45:10.159412] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:59.388 [2024-12-07 02:45:10.159468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.388 [2024-12-07 02:45:10.166232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:11:59.388 spare 00:11:59.388 02:45:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.388 02:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:59.388 [2024-12-07 02:45:10.168415] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.328 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.329 "name": "raid_bdev1", 00:12:00.329 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:12:00.329 "strip_size_kb": 0, 00:12:00.329 "state": "online", 00:12:00.329 "raid_level": "raid1", 00:12:00.329 "superblock": true, 00:12:00.329 "num_base_bdevs": 2, 00:12:00.329 "num_base_bdevs_discovered": 2, 00:12:00.329 "num_base_bdevs_operational": 2, 00:12:00.329 "process": { 00:12:00.329 "type": "rebuild", 00:12:00.329 "target": "spare", 00:12:00.329 "progress": { 00:12:00.329 "blocks": 20480, 00:12:00.329 "percent": 32 00:12:00.329 } 00:12:00.329 }, 00:12:00.329 "base_bdevs_list": [ 00:12:00.329 { 00:12:00.329 "name": "spare", 00:12:00.329 "uuid": "a087439f-e461-5e76-98a7-4cdc78ec1566", 00:12:00.329 "is_configured": true, 00:12:00.329 "data_offset": 2048, 00:12:00.329 "data_size": 63488 00:12:00.329 }, 00:12:00.329 { 00:12:00.329 "name": "BaseBdev2", 00:12:00.329 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:12:00.329 "is_configured": true, 00:12:00.329 "data_offset": 2048, 00:12:00.329 "data_size": 63488 00:12:00.329 } 00:12:00.329 ] 00:12:00.329 }' 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.329 
02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.329 [2024-12-07 02:45:11.324370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.329 [2024-12-07 02:45:11.375993] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:00.329 [2024-12-07 02:45:11.376061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.329 [2024-12-07 02:45:11.376078] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.329 [2024-12-07 02:45:11.376088] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.329 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.597 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.597 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.597 "name": "raid_bdev1", 00:12:00.597 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:12:00.597 "strip_size_kb": 0, 00:12:00.597 "state": "online", 00:12:00.597 "raid_level": "raid1", 00:12:00.597 "superblock": true, 00:12:00.597 "num_base_bdevs": 2, 00:12:00.597 "num_base_bdevs_discovered": 1, 00:12:00.597 "num_base_bdevs_operational": 1, 00:12:00.597 "base_bdevs_list": [ 00:12:00.597 { 00:12:00.597 "name": null, 00:12:00.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.597 "is_configured": false, 00:12:00.597 "data_offset": 0, 00:12:00.597 "data_size": 63488 00:12:00.597 }, 00:12:00.597 { 00:12:00.597 "name": "BaseBdev2", 00:12:00.597 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:12:00.597 "is_configured": true, 00:12:00.597 "data_offset": 2048, 00:12:00.597 "data_size": 63488 00:12:00.597 } 00:12:00.597 ] 00:12:00.597 }' 00:12:00.597 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.597 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.858 02:45:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.858 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.859 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.859 "name": "raid_bdev1", 00:12:00.859 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:12:00.859 "strip_size_kb": 0, 00:12:00.859 "state": "online", 00:12:00.859 "raid_level": "raid1", 00:12:00.859 "superblock": true, 00:12:00.859 "num_base_bdevs": 2, 00:12:00.859 "num_base_bdevs_discovered": 1, 00:12:00.859 "num_base_bdevs_operational": 1, 00:12:00.859 "base_bdevs_list": [ 00:12:00.859 { 00:12:00.859 "name": null, 00:12:00.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.859 "is_configured": false, 00:12:00.859 "data_offset": 0, 00:12:00.859 "data_size": 63488 00:12:00.859 }, 00:12:00.859 { 00:12:00.859 "name": "BaseBdev2", 00:12:00.859 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:12:00.859 "is_configured": true, 00:12:00.859 "data_offset": 2048, 00:12:00.859 "data_size": 
63488 00:12:00.859 } 00:12:00.859 ] 00:12:00.859 }' 00:12:00.859 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.859 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:00.859 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.118 [2024-12-07 02:45:11.990331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:01.118 [2024-12-07 02:45:11.990391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.118 [2024-12-07 02:45:11.990412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:01.118 [2024-12-07 02:45:11.990424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.118 [2024-12-07 02:45:11.990914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.118 [2024-12-07 02:45:11.990957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:01.118 [2024-12-07 02:45:11.991032] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:01.118 [2024-12-07 02:45:11.991059] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:01.118 [2024-12-07 02:45:11.991068] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:01.118 [2024-12-07 02:45:11.991082] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:01.118 BaseBdev1 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.118 02:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.056 "name": "raid_bdev1", 00:12:02.056 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:12:02.056 "strip_size_kb": 0, 00:12:02.056 "state": "online", 00:12:02.056 "raid_level": "raid1", 00:12:02.056 "superblock": true, 00:12:02.056 "num_base_bdevs": 2, 00:12:02.056 "num_base_bdevs_discovered": 1, 00:12:02.056 "num_base_bdevs_operational": 1, 00:12:02.056 "base_bdevs_list": [ 00:12:02.056 { 00:12:02.056 "name": null, 00:12:02.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.056 "is_configured": false, 00:12:02.056 "data_offset": 0, 00:12:02.056 "data_size": 63488 00:12:02.056 }, 00:12:02.056 { 00:12:02.056 "name": "BaseBdev2", 00:12:02.056 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:12:02.056 "is_configured": true, 00:12:02.056 "data_offset": 2048, 00:12:02.056 "data_size": 63488 00:12:02.056 } 00:12:02.056 ] 00:12:02.056 }' 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.056 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.626 "name": "raid_bdev1", 00:12:02.626 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:12:02.626 "strip_size_kb": 0, 00:12:02.626 "state": "online", 00:12:02.626 "raid_level": "raid1", 00:12:02.626 "superblock": true, 00:12:02.626 "num_base_bdevs": 2, 00:12:02.626 "num_base_bdevs_discovered": 1, 00:12:02.626 "num_base_bdevs_operational": 1, 00:12:02.626 "base_bdevs_list": [ 00:12:02.626 { 00:12:02.626 "name": null, 00:12:02.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.626 "is_configured": false, 00:12:02.626 "data_offset": 0, 00:12:02.626 "data_size": 63488 00:12:02.626 }, 00:12:02.626 { 00:12:02.626 "name": "BaseBdev2", 00:12:02.626 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:12:02.626 "is_configured": true, 00:12:02.626 "data_offset": 2048, 00:12:02.626 "data_size": 63488 00:12:02.626 } 00:12:02.626 ] 00:12:02.626 }' 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.626 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.627 02:45:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.627 [2024-12-07 02:45:13.611594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.627 [2024-12-07 02:45:13.611794] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:02.627 [2024-12-07 02:45:13.611815] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:02.627 request: 00:12:02.627 { 00:12:02.627 "base_bdev": "BaseBdev1", 00:12:02.627 "raid_bdev": "raid_bdev1", 00:12:02.627 "method": 
"bdev_raid_add_base_bdev", 00:12:02.627 "req_id": 1 00:12:02.627 } 00:12:02.627 Got JSON-RPC error response 00:12:02.627 response: 00:12:02.627 { 00:12:02.627 "code": -22, 00:12:02.627 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:02.627 } 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:02.627 02:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.567 02:45:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.567 02:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.827 02:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.827 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.827 "name": "raid_bdev1", 00:12:03.827 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:12:03.827 "strip_size_kb": 0, 00:12:03.827 "state": "online", 00:12:03.827 "raid_level": "raid1", 00:12:03.827 "superblock": true, 00:12:03.827 "num_base_bdevs": 2, 00:12:03.827 "num_base_bdevs_discovered": 1, 00:12:03.827 "num_base_bdevs_operational": 1, 00:12:03.827 "base_bdevs_list": [ 00:12:03.827 { 00:12:03.827 "name": null, 00:12:03.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.827 "is_configured": false, 00:12:03.827 "data_offset": 0, 00:12:03.827 "data_size": 63488 00:12:03.827 }, 00:12:03.827 { 00:12:03.828 "name": "BaseBdev2", 00:12:03.828 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:12:03.828 "is_configured": true, 00:12:03.828 "data_offset": 2048, 00:12:03.828 "data_size": 63488 00:12:03.828 } 00:12:03.828 ] 00:12:03.828 }' 00:12:03.828 02:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.828 02:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.087 "name": "raid_bdev1", 00:12:04.087 "uuid": "29d6766f-08fd-41f3-a6ac-c4cfc0b88067", 00:12:04.087 "strip_size_kb": 0, 00:12:04.087 "state": "online", 00:12:04.087 "raid_level": "raid1", 00:12:04.087 "superblock": true, 00:12:04.087 "num_base_bdevs": 2, 00:12:04.087 "num_base_bdevs_discovered": 1, 00:12:04.087 "num_base_bdevs_operational": 1, 00:12:04.087 "base_bdevs_list": [ 00:12:04.087 { 00:12:04.087 "name": null, 00:12:04.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.087 "is_configured": false, 00:12:04.087 "data_offset": 0, 00:12:04.087 "data_size": 63488 00:12:04.087 }, 00:12:04.087 { 00:12:04.087 "name": "BaseBdev2", 00:12:04.087 "uuid": "33a03728-e44e-58de-82c1-e3752f1d5951", 00:12:04.087 "is_configured": true, 00:12:04.087 "data_offset": 2048, 00:12:04.087 "data_size": 63488 00:12:04.087 } 00:12:04.087 ] 00:12:04.087 }' 00:12:04.087 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86665 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86665 ']' 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86665 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86665 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86665' 00:12:04.347 killing process with pid 86665 00:12:04.347 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86665 00:12:04.347 Received shutdown signal, test time was about 60.000000 seconds 00:12:04.347 00:12:04.347 Latency(us) 00:12:04.347 [2024-12-07T02:45:15.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.347 [2024-12-07T02:45:15.425Z] =================================================================================================================== 00:12:04.347 [2024-12-07T02:45:15.425Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:04.347 [2024-12-07 02:45:15.266170] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.347 02:45:15 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86665 00:12:04.347 [2024-12-07 02:45:15.266343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:04.347 [2024-12-07 02:45:15.266415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:04.347 [2024-12-07 02:45:15.266424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:04.347 [2024-12-07 02:45:15.324596] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:04.916 00:12:04.916 real 0m22.212s 00:12:04.916 user 0m27.283s 00:12:04.916 sys 0m3.942s 00:12:04.916 ************************************ 00:12:04.916 END TEST raid_rebuild_test_sb 00:12:04.916 ************************************ 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.916 02:45:15 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:04.916 02:45:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:04.916 02:45:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.916 02:45:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.916 ************************************ 00:12:04.916 START TEST raid_rebuild_test_io 00:12:04.916 ************************************ 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:04.916 
02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87390 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87390 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 87390 ']' 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.916 02:45:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:04.916 [2024-12-07 02:45:15.885500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:04.916 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:04.916 Zero copy mechanism will not be used. 
00:12:04.916 [2024-12-07 02:45:15.885734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87390 ] 00:12:05.187 [2024-12-07 02:45:16.052870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.187 [2024-12-07 02:45:16.128789] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.187 [2024-12-07 02:45:16.206908] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.187 [2024-12-07 02:45:16.206971] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.757 BaseBdev1_malloc 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.757 [2024-12-07 02:45:16.746527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:05.757 [2024-12-07 02:45:16.746600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.757 [2024-12-07 02:45:16.746631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:05.757 [2024-12-07 02:45:16.746646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.757 [2024-12-07 02:45:16.749163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.757 [2024-12-07 02:45:16.749200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.757 BaseBdev1 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.757 BaseBdev2_malloc 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.757 [2024-12-07 02:45:16.796633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:05.757 [2024-12-07 02:45:16.796817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.757 [2024-12-07 02:45:16.796869] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:05.757 [2024-12-07 02:45:16.796888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.757 [2024-12-07 02:45:16.801417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.757 [2024-12-07 02:45:16.801477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.757 BaseBdev2 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:05.757 spare_malloc 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.757 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 spare_delay 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 [2024-12-07 02:45:16.845727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:06.018 [2024-12-07 02:45:16.845778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.018 [2024-12-07 02:45:16.845801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:06.018 [2024-12-07 02:45:16.845810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.018 [2024-12-07 02:45:16.848284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.018 [2024-12-07 02:45:16.848378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:06.018 spare 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 [2024-12-07 02:45:16.857751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.018 [2024-12-07 02:45:16.860001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.018 [2024-12-07 02:45:16.860087] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:06.018 [2024-12-07 02:45:16.860099] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:06.018 [2024-12-07 02:45:16.860352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:06.018 [2024-12-07 02:45:16.860481] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:06.018 [2024-12-07 02:45:16.860495] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000006280 00:12:06.018 [2024-12-07 02:45:16.860660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.018 
"name": "raid_bdev1", 00:12:06.018 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:06.018 "strip_size_kb": 0, 00:12:06.018 "state": "online", 00:12:06.018 "raid_level": "raid1", 00:12:06.018 "superblock": false, 00:12:06.018 "num_base_bdevs": 2, 00:12:06.018 "num_base_bdevs_discovered": 2, 00:12:06.018 "num_base_bdevs_operational": 2, 00:12:06.018 "base_bdevs_list": [ 00:12:06.018 { 00:12:06.018 "name": "BaseBdev1", 00:12:06.018 "uuid": "38de0f20-3566-5a4d-8edf-a884cbda7e23", 00:12:06.018 "is_configured": true, 00:12:06.018 "data_offset": 0, 00:12:06.018 "data_size": 65536 00:12:06.018 }, 00:12:06.018 { 00:12:06.018 "name": "BaseBdev2", 00:12:06.018 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:06.018 "is_configured": true, 00:12:06.018 "data_offset": 0, 00:12:06.018 "data_size": 65536 00:12:06.018 } 00:12:06.018 ] 00:12:06.018 }' 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.018 02:45:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.278 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.278 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.278 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.278 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:06.278 [2024-12-07 02:45:17.353252] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.537 [2024-12-07 02:45:17.456756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:06.537 02:45:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.537 "name": "raid_bdev1", 00:12:06.537 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:06.537 "strip_size_kb": 0, 00:12:06.537 "state": "online", 00:12:06.537 "raid_level": "raid1", 00:12:06.537 "superblock": false, 00:12:06.537 "num_base_bdevs": 2, 00:12:06.537 "num_base_bdevs_discovered": 1, 00:12:06.537 "num_base_bdevs_operational": 1, 00:12:06.537 "base_bdevs_list": [ 00:12:06.537 { 00:12:06.537 "name": null, 00:12:06.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.537 "is_configured": false, 00:12:06.537 "data_offset": 0, 00:12:06.537 "data_size": 65536 00:12:06.537 }, 00:12:06.537 { 00:12:06.537 "name": "BaseBdev2", 00:12:06.537 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:06.537 "is_configured": true, 00:12:06.537 "data_offset": 0, 00:12:06.537 "data_size": 65536 00:12:06.537 } 00:12:06.537 ] 00:12:06.537 }' 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:06.537 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:06.537 [2024-12-07 02:45:17.548126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:06.537 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:06.537 Zero copy mechanism will not be used. 00:12:06.537 Running I/O for 60 seconds... 00:12:07.105 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:07.105 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.105 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:07.105 [2024-12-07 02:45:17.945828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:07.105 02:45:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.105 02:45:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:07.105 [2024-12-07 02:45:17.988991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:07.105 [2024-12-07 02:45:17.991361] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:07.105 [2024-12-07 02:45:18.106007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:07.105 [2024-12-07 02:45:18.106836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:07.365 [2024-12-07 02:45:18.320861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:07.365 [2024-12-07 02:45:18.321424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:07.885 189.00 IOPS, 567.00 MiB/s 
[2024-12-07T02:45:18.963Z] [2024-12-07 02:45:18.749052] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.146 02:45:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.146 [2024-12-07 02:45:18.993011] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:08.146 [2024-12-07 02:45:18.993414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.146 "name": "raid_bdev1", 00:12:08.146 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:08.146 "strip_size_kb": 0, 00:12:08.146 "state": "online", 00:12:08.146 "raid_level": "raid1", 00:12:08.146 "superblock": false, 00:12:08.146 "num_base_bdevs": 2, 00:12:08.146 
"num_base_bdevs_discovered": 2, 00:12:08.146 "num_base_bdevs_operational": 2, 00:12:08.146 "process": { 00:12:08.146 "type": "rebuild", 00:12:08.146 "target": "spare", 00:12:08.146 "progress": { 00:12:08.146 "blocks": 12288, 00:12:08.146 "percent": 18 00:12:08.146 } 00:12:08.146 }, 00:12:08.146 "base_bdevs_list": [ 00:12:08.146 { 00:12:08.146 "name": "spare", 00:12:08.146 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 00:12:08.146 "is_configured": true, 00:12:08.146 "data_offset": 0, 00:12:08.146 "data_size": 65536 00:12:08.146 }, 00:12:08.146 { 00:12:08.146 "name": "BaseBdev2", 00:12:08.146 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:08.146 "is_configured": true, 00:12:08.146 "data_offset": 0, 00:12:08.146 "data_size": 65536 00:12:08.146 } 00:12:08.146 ] 00:12:08.146 }' 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.146 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.146 [2024-12-07 02:45:19.120398] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.406 [2024-12-07 02:45:19.277477] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:08.406 [2024-12-07 02:45:19.285733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.406 [2024-12-07 02:45:19.285775] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:08.406 [2024-12-07 02:45:19.285789] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:08.406 [2024-12-07 02:45:19.300695] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.406 "name": "raid_bdev1", 00:12:08.406 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:08.406 "strip_size_kb": 0, 00:12:08.406 "state": "online", 00:12:08.406 "raid_level": "raid1", 00:12:08.406 "superblock": false, 00:12:08.406 "num_base_bdevs": 2, 00:12:08.406 "num_base_bdevs_discovered": 1, 00:12:08.406 "num_base_bdevs_operational": 1, 00:12:08.406 "base_bdevs_list": [ 00:12:08.406 { 00:12:08.406 "name": null, 00:12:08.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.406 "is_configured": false, 00:12:08.406 "data_offset": 0, 00:12:08.406 "data_size": 65536 00:12:08.406 }, 00:12:08.406 { 00:12:08.406 "name": "BaseBdev2", 00:12:08.406 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:08.406 "is_configured": true, 00:12:08.406 "data_offset": 0, 00:12:08.406 "data_size": 65536 00:12:08.406 } 00:12:08.406 ] 00:12:08.406 }' 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.406 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.927 157.00 IOPS, 471.00 MiB/s [2024-12-07T02:45:20.005Z] 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.927 "name": "raid_bdev1", 00:12:08.927 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:08.927 "strip_size_kb": 0, 00:12:08.927 "state": "online", 00:12:08.927 "raid_level": "raid1", 00:12:08.927 "superblock": false, 00:12:08.927 "num_base_bdevs": 2, 00:12:08.927 "num_base_bdevs_discovered": 1, 00:12:08.927 "num_base_bdevs_operational": 1, 00:12:08.927 "base_bdevs_list": [ 00:12:08.927 { 00:12:08.927 "name": null, 00:12:08.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.927 "is_configured": false, 00:12:08.927 "data_offset": 0, 00:12:08.927 "data_size": 65536 00:12:08.927 }, 00:12:08.927 { 00:12:08.927 "name": "BaseBdev2", 00:12:08.927 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:08.927 "is_configured": true, 00:12:08.927 "data_offset": 0, 00:12:08.927 "data_size": 65536 00:12:08.927 } 00:12:08.927 ] 00:12:08.927 }' 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:08.927 02:45:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.927 [2024-12-07 02:45:19.926002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.927 02:45:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:08.927 [2024-12-07 02:45:19.995665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:08.927 [2024-12-07 02:45:19.997954] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:09.187 [2024-12-07 02:45:20.117894] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:09.187 [2024-12-07 02:45:20.118396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:09.187 [2024-12-07 02:45:20.239504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.187 [2024-12-07 02:45:20.240090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:09.447 [2024-12-07 02:45:20.492311] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:09.447 [2024-12-07 02:45:20.493121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:09.707 174.00 IOPS, 522.00 MiB/s [2024-12-07T02:45:20.785Z] [2024-12-07 02:45:20.706663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.967 02:45:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.967 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.967 "name": "raid_bdev1", 00:12:09.967 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:09.967 "strip_size_kb": 0, 00:12:09.967 "state": "online", 00:12:09.967 "raid_level": "raid1", 00:12:09.967 "superblock": false, 00:12:09.967 "num_base_bdevs": 2, 00:12:09.967 "num_base_bdevs_discovered": 2, 00:12:09.967 "num_base_bdevs_operational": 2, 00:12:09.967 "process": { 00:12:09.967 "type": "rebuild", 00:12:09.967 "target": "spare", 00:12:09.967 "progress": { 00:12:09.967 "blocks": 12288, 00:12:09.967 "percent": 18 00:12:09.967 } 00:12:09.967 }, 00:12:09.967 "base_bdevs_list": [ 00:12:09.967 { 00:12:09.967 "name": "spare", 00:12:09.967 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 00:12:09.967 "is_configured": true, 00:12:09.967 "data_offset": 0, 00:12:09.967 "data_size": 65536 00:12:09.967 }, 00:12:09.967 { 
00:12:09.967 "name": "BaseBdev2", 00:12:09.967 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:09.967 "is_configured": true, 00:12:09.967 "data_offset": 0, 00:12:09.967 "data_size": 65536 00:12:09.967 } 00:12:09.967 ] 00:12:09.967 }' 00:12:09.967 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.227 [2024-12-07 02:45:21.050743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=334 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.227 02:45:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.227 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.228 "name": "raid_bdev1", 00:12:10.228 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:10.228 "strip_size_kb": 0, 00:12:10.228 "state": "online", 00:12:10.228 "raid_level": "raid1", 00:12:10.228 "superblock": false, 00:12:10.228 "num_base_bdevs": 2, 00:12:10.228 "num_base_bdevs_discovered": 2, 00:12:10.228 "num_base_bdevs_operational": 2, 00:12:10.228 "process": { 00:12:10.228 "type": "rebuild", 00:12:10.228 "target": "spare", 00:12:10.228 "progress": { 00:12:10.228 "blocks": 14336, 00:12:10.228 "percent": 21 00:12:10.228 } 00:12:10.228 }, 00:12:10.228 "base_bdevs_list": [ 00:12:10.228 { 00:12:10.228 "name": "spare", 00:12:10.228 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 00:12:10.228 "is_configured": true, 00:12:10.228 "data_offset": 0, 00:12:10.228 "data_size": 65536 00:12:10.228 }, 00:12:10.228 { 00:12:10.228 "name": "BaseBdev2", 00:12:10.228 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:10.228 "is_configured": true, 00:12:10.228 "data_offset": 0, 00:12:10.228 "data_size": 65536 00:12:10.228 } 00:12:10.228 ] 00:12:10.228 }' 00:12:10.228 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.228 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:10.228 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.228 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:10.228 02:45:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:10.486 [2024-12-07 02:45:21.380990] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:10.486 [2024-12-07 02:45:21.381806] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:10.746 149.25 IOPS, 447.75 MiB/s [2024-12-07T02:45:21.824Z] [2024-12-07 02:45:21.583832] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:10.746 [2024-12-07 02:45:21.584203] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:11.315 [2024-12-07 02:45:22.107953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:11.315 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:11.315 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.315 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.315 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.315 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.316 "name": "raid_bdev1", 00:12:11.316 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:11.316 "strip_size_kb": 0, 00:12:11.316 "state": "online", 00:12:11.316 "raid_level": "raid1", 00:12:11.316 "superblock": false, 00:12:11.316 "num_base_bdevs": 2, 00:12:11.316 "num_base_bdevs_discovered": 2, 00:12:11.316 "num_base_bdevs_operational": 2, 00:12:11.316 "process": { 00:12:11.316 "type": "rebuild", 00:12:11.316 "target": "spare", 00:12:11.316 "progress": { 00:12:11.316 "blocks": 32768, 00:12:11.316 "percent": 50 00:12:11.316 } 00:12:11.316 }, 00:12:11.316 "base_bdevs_list": [ 00:12:11.316 { 00:12:11.316 "name": "spare", 00:12:11.316 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 00:12:11.316 "is_configured": true, 00:12:11.316 "data_offset": 0, 00:12:11.316 "data_size": 65536 00:12:11.316 }, 00:12:11.316 { 00:12:11.316 "name": "BaseBdev2", 00:12:11.316 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:11.316 "is_configured": true, 00:12:11.316 "data_offset": 0, 00:12:11.316 "data_size": 65536 00:12:11.316 } 00:12:11.316 ] 00:12:11.316 }' 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.316 [2024-12-07 02:45:22.314932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:11.316 [2024-12-07 02:45:22.315245] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.316 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.579 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.579 02:45:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:11.879 130.80 IOPS, 392.40 MiB/s [2024-12-07T02:45:22.957Z] [2024-12-07 02:45:22.750419] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:12.165 [2024-12-07 02:45:23.080320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:12.434 [2024-12-07 02:45:23.289307] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.434 02:45:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.434 "name": "raid_bdev1", 00:12:12.434 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:12.434 "strip_size_kb": 0, 00:12:12.434 "state": "online", 00:12:12.434 "raid_level": "raid1", 00:12:12.434 "superblock": false, 00:12:12.434 "num_base_bdevs": 2, 00:12:12.434 "num_base_bdevs_discovered": 2, 00:12:12.434 "num_base_bdevs_operational": 2, 00:12:12.434 "process": { 00:12:12.434 "type": "rebuild", 00:12:12.434 "target": "spare", 00:12:12.434 "progress": { 00:12:12.434 "blocks": 49152, 00:12:12.434 "percent": 75 00:12:12.434 } 00:12:12.434 }, 00:12:12.434 "base_bdevs_list": [ 00:12:12.434 { 00:12:12.434 "name": "spare", 00:12:12.434 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 00:12:12.434 "is_configured": true, 00:12:12.434 "data_offset": 0, 00:12:12.434 "data_size": 65536 00:12:12.434 }, 00:12:12.434 { 00:12:12.434 "name": "BaseBdev2", 00:12:12.434 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:12.434 "is_configured": true, 00:12:12.434 "data_offset": 0, 00:12:12.434 "data_size": 65536 00:12:12.434 } 00:12:12.434 ] 00:12:12.434 }' 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:12.434 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.694 115.33 IOPS, 346.00 MiB/s [2024-12-07T02:45:23.772Z] 02:45:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.694 02:45:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.265 [2024-12-07 02:45:24.254140] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:13.525 [2024-12-07 02:45:24.354017] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:13.525 [2024-12-07 02:45:24.356571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.525 103.14 IOPS, 309.43 MiB/s [2024-12-07T02:45:24.603Z] 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.525 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.785 "name": "raid_bdev1", 00:12:13.785 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:13.785 "strip_size_kb": 0, 00:12:13.785 "state": "online", 00:12:13.785 
"raid_level": "raid1", 00:12:13.785 "superblock": false, 00:12:13.785 "num_base_bdevs": 2, 00:12:13.785 "num_base_bdevs_discovered": 2, 00:12:13.785 "num_base_bdevs_operational": 2, 00:12:13.785 "base_bdevs_list": [ 00:12:13.785 { 00:12:13.785 "name": "spare", 00:12:13.785 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 00:12:13.785 "is_configured": true, 00:12:13.785 "data_offset": 0, 00:12:13.785 "data_size": 65536 00:12:13.785 }, 00:12:13.785 { 00:12:13.785 "name": "BaseBdev2", 00:12:13.785 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:13.785 "is_configured": true, 00:12:13.785 "data_offset": 0, 00:12:13.785 "data_size": 65536 00:12:13.785 } 00:12:13.785 ] 00:12:13.785 }' 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.785 "name": "raid_bdev1", 00:12:13.785 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:13.785 "strip_size_kb": 0, 00:12:13.785 "state": "online", 00:12:13.785 "raid_level": "raid1", 00:12:13.785 "superblock": false, 00:12:13.785 "num_base_bdevs": 2, 00:12:13.785 "num_base_bdevs_discovered": 2, 00:12:13.785 "num_base_bdevs_operational": 2, 00:12:13.785 "base_bdevs_list": [ 00:12:13.785 { 00:12:13.785 "name": "spare", 00:12:13.785 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 00:12:13.785 "is_configured": true, 00:12:13.785 "data_offset": 0, 00:12:13.785 "data_size": 65536 00:12:13.785 }, 00:12:13.785 { 00:12:13.785 "name": "BaseBdev2", 00:12:13.785 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:13.785 "is_configured": true, 00:12:13.785 "data_offset": 0, 00:12:13.785 "data_size": 65536 00:12:13.785 } 00:12:13.785 ] 00:12:13.785 }' 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.785 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.044 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.044 "name": "raid_bdev1", 00:12:14.044 "uuid": "dcb464e9-f46c-4dbc-8553-5763c3b2115b", 00:12:14.044 "strip_size_kb": 0, 00:12:14.044 "state": "online", 00:12:14.044 "raid_level": "raid1", 00:12:14.044 "superblock": false, 00:12:14.044 "num_base_bdevs": 2, 00:12:14.045 "num_base_bdevs_discovered": 2, 00:12:14.045 "num_base_bdevs_operational": 2, 00:12:14.045 "base_bdevs_list": [ 00:12:14.045 { 00:12:14.045 "name": "spare", 00:12:14.045 "uuid": "d62ba7f7-5c2c-5ff9-8762-9e7ca9c98153", 
00:12:14.045 "is_configured": true, 00:12:14.045 "data_offset": 0, 00:12:14.045 "data_size": 65536 00:12:14.045 }, 00:12:14.045 { 00:12:14.045 "name": "BaseBdev2", 00:12:14.045 "uuid": "2f99588f-6f36-5018-98ee-7270c29ff8c6", 00:12:14.045 "is_configured": true, 00:12:14.045 "data_offset": 0, 00:12:14.045 "data_size": 65536 00:12:14.045 } 00:12:14.045 ] 00:12:14.045 }' 00:12:14.045 02:45:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.045 02:45:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.303 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.303 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.303 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.303 [2024-12-07 02:45:25.342832] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.303 [2024-12-07 02:45:25.342928] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.562 00:12:14.562 Latency(us) 00:12:14.562 [2024-12-07T02:45:25.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.562 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:14.562 raid_bdev1 : 7.90 95.12 285.36 0.00 0.00 13880.96 277.24 108520.75 00:12:14.562 [2024-12-07T02:45:25.640Z] =================================================================================================================== 00:12:14.562 [2024-12-07T02:45:25.640Z] Total : 95.12 285.36 0.00 0.00 13880.96 277.24 108520.75 00:12:14.562 { 00:12:14.562 "results": [ 00:12:14.562 { 00:12:14.562 "job": "raid_bdev1", 00:12:14.562 "core_mask": "0x1", 00:12:14.562 "workload": "randrw", 00:12:14.562 "percentage": 50, 00:12:14.562 "status": "finished", 00:12:14.562 "queue_depth": 2, 00:12:14.562 
"io_size": 3145728, 00:12:14.562 "runtime": 7.895237, 00:12:14.562 "iops": 95.12064045702492, 00:12:14.562 "mibps": 285.36192137107474, 00:12:14.562 "io_failed": 0, 00:12:14.562 "io_timeout": 0, 00:12:14.562 "avg_latency_us": 13880.964573581658, 00:12:14.562 "min_latency_us": 277.2401746724891, 00:12:14.562 "max_latency_us": 108520.74759825328 00:12:14.562 } 00:12:14.562 ], 00:12:14.562 "core_count": 1 00:12:14.562 } 00:12:14.562 [2024-12-07 02:45:25.434230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.562 [2024-12-07 02:45:25.434276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.562 [2024-12-07 02:45:25.434357] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.562 [2024-12-07 02:45:25.434368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare 
/dev/nbd0 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:14.562 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.563 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.563 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:14.822 /dev/nbd0 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:14.822 02:45:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:14.822 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:14.822 1+0 records in 00:12:14.822 1+0 records out 00:12:14.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373235 s, 11.0 MB/s 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:14.823 
02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:14.823 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:15.082 /dev/nbd1 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.082 1+0 records in 00:12:15.082 1+0 records out 00:12:15.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505403 s, 8.1 MB/s 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:15.082 02:45:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:15.082 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:15.082 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.082 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:15.082 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.082 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:15.082 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.082 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:15.343 
02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.343 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # 
break 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 87390 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 87390 ']' 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 87390 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87390 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87390' 00:12:15.603 killing process with pid 87390 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 87390 00:12:15.603 Received shutdown signal, test time was about 8.988458 seconds 00:12:15.603 00:12:15.603 Latency(us) 00:12:15.603 [2024-12-07T02:45:26.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.603 [2024-12-07T02:45:26.681Z] =================================================================================================================== 00:12:15.603 [2024-12-07T02:45:26.681Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:15.603 [2024-12-07 02:45:26.521762] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.603 02:45:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@974 -- # wait 87390 00:12:15.603 [2024-12-07 02:45:26.568492] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:15.863 02:45:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:15.863 00:12:15.863 real 0m11.152s 00:12:15.863 user 0m14.280s 00:12:15.863 sys 0m1.597s 00:12:15.863 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:15.863 ************************************ 00:12:15.863 END TEST raid_rebuild_test_io 00:12:15.863 ************************************ 00:12:15.863 02:45:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.122 02:45:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:16.122 02:45:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:16.122 02:45:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.122 02:45:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.122 ************************************ 00:12:16.122 START TEST raid_rebuild_test_sb_io 00:12:16.122 ************************************ 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:16.122 
02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:16.122 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87755 
00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87755 00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87755 ']' 00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.123 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.123 [2024-12-07 02:45:27.111607] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:16.123 [2024-12-07 02:45:27.111820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87755 ] 00:12:16.123 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:16.123 Zero copy mechanism will not be used. 
00:12:16.381 [2024-12-07 02:45:27.276785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.381 [2024-12-07 02:45:27.345936] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.381 [2024-12-07 02:45:27.421301] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.382 [2024-12-07 02:45:27.421413] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 BaseBdev1_malloc 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 [2024-12-07 02:45:27.960463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:16.950 [2024-12-07 02:45:27.960542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.950 [2024-12-07 02:45:27.960577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:12:16.950 [2024-12-07 02:45:27.960619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.950 [2024-12-07 02:45:27.963023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.950 [2024-12-07 02:45:27.963060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.950 BaseBdev1 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 BaseBdev2_malloc 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.950 02:45:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 [2024-12-07 02:45:28.004627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:16.950 [2024-12-07 02:45:28.004681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.950 [2024-12-07 02:45:28.004704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:16.950 [2024-12-07 02:45:28.004715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.950 [2024-12-07 02:45:28.007221] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.950 [2024-12-07 02:45:28.007310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:16.950 BaseBdev2 00:12:16.950 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.950 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:16.950 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.950 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.210 spare_malloc 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.210 spare_delay 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.210 [2024-12-07 02:45:28.051787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:17.210 [2024-12-07 02:45:28.051840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.210 [2024-12-07 02:45:28.051863] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:17.210 [2024-12-07 02:45:28.051873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.210 [2024-12-07 02:45:28.054276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.210 [2024-12-07 02:45:28.054311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:17.210 spare 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.210 [2024-12-07 02:45:28.063826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.210 [2024-12-07 02:45:28.065936] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.210 [2024-12-07 02:45:28.066119] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:17.210 [2024-12-07 02:45:28.066164] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.210 [2024-12-07 02:45:28.066425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:12:17.210 [2024-12-07 02:45:28.066618] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:17.210 [2024-12-07 02:45:28.066664] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:17.210 [2024-12-07 02:45:28.066835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.210 02:45:28 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.210 "name": "raid_bdev1", 00:12:17.210 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:17.210 
"strip_size_kb": 0, 00:12:17.210 "state": "online", 00:12:17.210 "raid_level": "raid1", 00:12:17.210 "superblock": true, 00:12:17.210 "num_base_bdevs": 2, 00:12:17.210 "num_base_bdevs_discovered": 2, 00:12:17.210 "num_base_bdevs_operational": 2, 00:12:17.210 "base_bdevs_list": [ 00:12:17.210 { 00:12:17.210 "name": "BaseBdev1", 00:12:17.210 "uuid": "f62c4e08-bd9f-5c79-87cf-26c31b5ae2a9", 00:12:17.210 "is_configured": true, 00:12:17.210 "data_offset": 2048, 00:12:17.210 "data_size": 63488 00:12:17.210 }, 00:12:17.210 { 00:12:17.210 "name": "BaseBdev2", 00:12:17.210 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:17.210 "is_configured": true, 00:12:17.210 "data_offset": 2048, 00:12:17.210 "data_size": 63488 00:12:17.210 } 00:12:17.210 ] 00:12:17.210 }' 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.210 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:17.469 [2024-12-07 02:45:28.467401] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.469 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.730 [2024-12-07 02:45:28.554989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:17.730 02:45:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.730 "name": "raid_bdev1", 00:12:17.730 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:17.730 "strip_size_kb": 0, 00:12:17.730 "state": "online", 00:12:17.730 "raid_level": "raid1", 00:12:17.730 "superblock": true, 00:12:17.730 "num_base_bdevs": 2, 00:12:17.730 "num_base_bdevs_discovered": 1, 00:12:17.730 "num_base_bdevs_operational": 1, 00:12:17.730 "base_bdevs_list": [ 00:12:17.730 { 00:12:17.730 "name": null, 00:12:17.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.730 "is_configured": false, 00:12:17.730 "data_offset": 0, 00:12:17.730 "data_size": 63488 00:12:17.730 }, 00:12:17.730 { 00:12:17.730 "name": "BaseBdev2", 00:12:17.730 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:17.730 "is_configured": true, 00:12:17.730 "data_offset": 2048, 00:12:17.730 "data_size": 63488 00:12:17.730 } 00:12:17.730 ] 00:12:17.730 }' 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.730 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.730 [2024-12-07 02:45:28.650261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.730 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:17.730 Zero copy mechanism will not be used. 00:12:17.730 Running I/O for 60 seconds... 00:12:17.990 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:17.990 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.990 02:45:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.990 [2024-12-07 02:45:28.986416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:17.990 02:45:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.990 02:45:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:17.991 [2024-12-07 02:45:29.033117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:17.991 [2024-12-07 02:45:29.035374] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:18.251 [2024-12-07 02:45:29.152854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:18.251 [2024-12-07 02:45:29.153596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:18.511 [2024-12-07 02:45:29.374445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:18.511 [2024-12-07 02:45:29.374984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:12:18.770 216.00 IOPS, 648.00 MiB/s [2024-12-07T02:45:29.848Z] [2024-12-07 02:45:29.734051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:18.770 [2024-12-07 02:45:29.734444] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:19.030 [2024-12-07 02:45:29.946127] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:19.030 [2024-12-07 02:45:29.946869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.031 [2024-12-07 02:45:30.066665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 
12288 offset_end: 18432 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.031 "name": "raid_bdev1", 00:12:19.031 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:19.031 "strip_size_kb": 0, 00:12:19.031 "state": "online", 00:12:19.031 "raid_level": "raid1", 00:12:19.031 "superblock": true, 00:12:19.031 "num_base_bdevs": 2, 00:12:19.031 "num_base_bdevs_discovered": 2, 00:12:19.031 "num_base_bdevs_operational": 2, 00:12:19.031 "process": { 00:12:19.031 "type": "rebuild", 00:12:19.031 "target": "spare", 00:12:19.031 "progress": { 00:12:19.031 "blocks": 14336, 00:12:19.031 "percent": 22 00:12:19.031 } 00:12:19.031 }, 00:12:19.031 "base_bdevs_list": [ 00:12:19.031 { 00:12:19.031 "name": "spare", 00:12:19.031 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:19.031 "is_configured": true, 00:12:19.031 "data_offset": 2048, 00:12:19.031 "data_size": 63488 00:12:19.031 }, 00:12:19.031 { 00:12:19.031 "name": "BaseBdev2", 00:12:19.031 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:19.031 "is_configured": true, 00:12:19.031 "data_offset": 2048, 00:12:19.031 "data_size": 63488 00:12:19.031 } 00:12:19.031 ] 00:12:19.031 }' 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:19.031 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:12:19.291 [2024-12-07 02:45:30.155290] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.291 [2024-12-07 02:45:30.173615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:19.291 [2024-12-07 02:45:30.173970] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:19.291 [2024-12-07 02:45:30.275748] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:19.291 [2024-12-07 02:45:30.282702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.291 [2024-12-07 02:45:30.282735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:19.291 [2024-12-07 02:45:30.282748] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:19.291 [2024-12-07 02:45:30.297538] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.291 "name": "raid_bdev1", 00:12:19.291 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:19.291 "strip_size_kb": 0, 00:12:19.291 "state": "online", 00:12:19.291 "raid_level": "raid1", 00:12:19.291 "superblock": true, 00:12:19.291 "num_base_bdevs": 2, 00:12:19.291 "num_base_bdevs_discovered": 1, 00:12:19.291 "num_base_bdevs_operational": 1, 00:12:19.291 "base_bdevs_list": [ 00:12:19.291 { 00:12:19.291 "name": null, 00:12:19.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.291 "is_configured": false, 00:12:19.291 "data_offset": 0, 00:12:19.291 "data_size": 63488 00:12:19.291 }, 00:12:19.291 { 00:12:19.291 "name": "BaseBdev2", 00:12:19.291 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:19.291 "is_configured": true, 00:12:19.291 "data_offset": 2048, 00:12:19.291 "data_size": 63488 00:12:19.291 } 00:12:19.291 ] 00:12:19.291 }' 00:12:19.291 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.291 02:45:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.861 181.00 IOPS, 543.00 MiB/s [2024-12-07T02:45:30.939Z] 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.861 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.861 "name": "raid_bdev1", 00:12:19.861 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:19.861 "strip_size_kb": 0, 00:12:19.862 "state": "online", 00:12:19.862 "raid_level": "raid1", 00:12:19.862 "superblock": true, 00:12:19.862 "num_base_bdevs": 2, 00:12:19.862 "num_base_bdevs_discovered": 1, 00:12:19.862 "num_base_bdevs_operational": 1, 00:12:19.862 "base_bdevs_list": [ 00:12:19.862 { 00:12:19.862 "name": null, 00:12:19.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.862 "is_configured": false, 00:12:19.862 "data_offset": 0, 00:12:19.862 "data_size": 63488 00:12:19.862 }, 00:12:19.862 { 
00:12:19.862 "name": "BaseBdev2", 00:12:19.862 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:19.862 "is_configured": true, 00:12:19.862 "data_offset": 2048, 00:12:19.862 "data_size": 63488 00:12:19.862 } 00:12:19.862 ] 00:12:19.862 }' 00:12:19.862 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.862 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.862 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.862 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.862 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:19.862 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.862 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.862 [2024-12-07 02:45:30.925469] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.122 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.122 02:45:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:20.122 [2024-12-07 02:45:30.971331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:20.122 [2024-12-07 02:45:30.973577] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:20.122 [2024-12-07 02:45:31.094610] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:20.122 [2024-12-07 02:45:31.100387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:20.381 [2024-12-07 02:45:31.320540] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:20.381 [2024-12-07 02:45:31.321080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:20.641 172.33 IOPS, 517.00 MiB/s [2024-12-07T02:45:31.719Z] [2024-12-07 02:45:31.651718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:20.902 [2024-12-07 02:45:31.769893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.902 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.162 02:45:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.162 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.162 "name": "raid_bdev1", 00:12:21.162 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 
00:12:21.162 "strip_size_kb": 0, 00:12:21.162 "state": "online", 00:12:21.162 "raid_level": "raid1", 00:12:21.162 "superblock": true, 00:12:21.162 "num_base_bdevs": 2, 00:12:21.162 "num_base_bdevs_discovered": 2, 00:12:21.162 "num_base_bdevs_operational": 2, 00:12:21.162 "process": { 00:12:21.162 "type": "rebuild", 00:12:21.162 "target": "spare", 00:12:21.162 "progress": { 00:12:21.162 "blocks": 10240, 00:12:21.162 "percent": 16 00:12:21.162 } 00:12:21.162 }, 00:12:21.162 "base_bdevs_list": [ 00:12:21.162 { 00:12:21.162 "name": "spare", 00:12:21.162 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:21.162 "is_configured": true, 00:12:21.162 "data_offset": 2048, 00:12:21.162 "data_size": 63488 00:12:21.162 }, 00:12:21.162 { 00:12:21.162 "name": "BaseBdev2", 00:12:21.162 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:21.162 "is_configured": true, 00:12:21.162 "data_offset": 2048, 00:12:21.162 "data_size": 63488 00:12:21.162 } 00:12:21.162 ] 00:12:21.162 }' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:21.163 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 
00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=345 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.163 [2024-12-07 02:45:32.110102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:21.163 [2024-12-07 02:45:32.110812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.163 "name": "raid_bdev1", 00:12:21.163 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:21.163 
"strip_size_kb": 0, 00:12:21.163 "state": "online", 00:12:21.163 "raid_level": "raid1", 00:12:21.163 "superblock": true, 00:12:21.163 "num_base_bdevs": 2, 00:12:21.163 "num_base_bdevs_discovered": 2, 00:12:21.163 "num_base_bdevs_operational": 2, 00:12:21.163 "process": { 00:12:21.163 "type": "rebuild", 00:12:21.163 "target": "spare", 00:12:21.163 "progress": { 00:12:21.163 "blocks": 12288, 00:12:21.163 "percent": 19 00:12:21.163 } 00:12:21.163 }, 00:12:21.163 "base_bdevs_list": [ 00:12:21.163 { 00:12:21.163 "name": "spare", 00:12:21.163 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:21.163 "is_configured": true, 00:12:21.163 "data_offset": 2048, 00:12:21.163 "data_size": 63488 00:12:21.163 }, 00:12:21.163 { 00:12:21.163 "name": "BaseBdev2", 00:12:21.163 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:21.163 "is_configured": true, 00:12:21.163 "data_offset": 2048, 00:12:21.163 "data_size": 63488 00:12:21.163 } 00:12:21.163 ] 00:12:21.163 }' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.163 02:45:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:21.423 [2024-12-07 02:45:32.331586] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:21.683 146.75 IOPS, 440.25 MiB/s [2024-12-07T02:45:32.761Z] [2024-12-07 02:45:32.662068] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:21.683 [2024-12-07 02:45:32.667998] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:21.943 [2024-12-07 02:45:32.872552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.203 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.466 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.466 "name": "raid_bdev1", 00:12:22.466 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:22.466 "strip_size_kb": 0, 00:12:22.466 "state": "online", 00:12:22.466 "raid_level": "raid1", 00:12:22.466 "superblock": true, 00:12:22.466 "num_base_bdevs": 2, 00:12:22.466 "num_base_bdevs_discovered": 2, 00:12:22.466 "num_base_bdevs_operational": 2, 00:12:22.466 "process": { 00:12:22.466 "type": 
"rebuild", 00:12:22.466 "target": "spare", 00:12:22.466 "progress": { 00:12:22.466 "blocks": 26624, 00:12:22.466 "percent": 41 00:12:22.466 } 00:12:22.466 }, 00:12:22.466 "base_bdevs_list": [ 00:12:22.466 { 00:12:22.466 "name": "spare", 00:12:22.466 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:22.466 "is_configured": true, 00:12:22.466 "data_offset": 2048, 00:12:22.466 "data_size": 63488 00:12:22.466 }, 00:12:22.466 { 00:12:22.466 "name": "BaseBdev2", 00:12:22.466 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:22.466 "is_configured": true, 00:12:22.466 "data_offset": 2048, 00:12:22.466 "data_size": 63488 00:12:22.466 } 00:12:22.466 ] 00:12:22.466 }' 00:12:22.466 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.466 [2024-12-07 02:45:33.307955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:22.466 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.466 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.466 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.466 02:45:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:22.727 [2024-12-07 02:45:33.634836] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:23.297 129.40 IOPS, 388.20 MiB/s [2024-12-07T02:45:34.376Z] [2024-12-07 02:45:34.191463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.558 "name": "raid_bdev1", 00:12:23.558 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:23.558 "strip_size_kb": 0, 00:12:23.558 "state": "online", 00:12:23.558 "raid_level": "raid1", 00:12:23.558 "superblock": true, 00:12:23.558 "num_base_bdevs": 2, 00:12:23.558 "num_base_bdevs_discovered": 2, 00:12:23.558 "num_base_bdevs_operational": 2, 00:12:23.558 "process": { 00:12:23.558 "type": "rebuild", 00:12:23.558 "target": "spare", 00:12:23.558 "progress": { 00:12:23.558 "blocks": 40960, 00:12:23.558 "percent": 64 00:12:23.558 } 00:12:23.558 }, 00:12:23.558 "base_bdevs_list": [ 00:12:23.558 { 00:12:23.558 "name": "spare", 00:12:23.558 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:23.558 "is_configured": true, 00:12:23.558 "data_offset": 2048, 00:12:23.558 "data_size": 63488 00:12:23.558 }, 00:12:23.558 { 
00:12:23.558 "name": "BaseBdev2", 00:12:23.558 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:23.558 "is_configured": true, 00:12:23.558 "data_offset": 2048, 00:12:23.558 "data_size": 63488 00:12:23.558 } 00:12:23.558 ] 00:12:23.558 }' 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.558 [2024-12-07 02:45:34.502574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.558 02:45:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:23.819 115.00 IOPS, 345.00 MiB/s [2024-12-07T02:45:34.897Z] [2024-12-07 02:45:34.704238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:23.819 [2024-12-07 02:45:34.704521] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:24.389 [2024-12-07 02:45:35.346510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:24.389 [2024-12-07 02:45:35.453257] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:24.648 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.649 "name": "raid_bdev1", 00:12:24.649 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:24.649 "strip_size_kb": 0, 00:12:24.649 "state": "online", 00:12:24.649 "raid_level": "raid1", 00:12:24.649 "superblock": true, 00:12:24.649 "num_base_bdevs": 2, 00:12:24.649 "num_base_bdevs_discovered": 2, 00:12:24.649 "num_base_bdevs_operational": 2, 00:12:24.649 "process": { 00:12:24.649 "type": "rebuild", 00:12:24.649 "target": "spare", 00:12:24.649 "progress": { 00:12:24.649 "blocks": 59392, 00:12:24.649 "percent": 93 00:12:24.649 } 00:12:24.649 }, 00:12:24.649 "base_bdevs_list": [ 00:12:24.649 { 00:12:24.649 "name": "spare", 00:12:24.649 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:24.649 "is_configured": true, 00:12:24.649 "data_offset": 2048, 00:12:24.649 "data_size": 63488 00:12:24.649 }, 00:12:24.649 { 00:12:24.649 "name": "BaseBdev2", 00:12:24.649 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:24.649 
"is_configured": true, 00:12:24.649 "data_offset": 2048, 00:12:24.649 "data_size": 63488 00:12:24.649 } 00:12:24.649 ] 00:12:24.649 }' 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.649 102.29 IOPS, 306.86 MiB/s [2024-12-07T02:45:35.727Z] 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.649 [2024-12-07 02:45:35.677829] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.649 02:45:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:24.909 [2024-12-07 02:45:35.777617] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:24.909 [2024-12-07 02:45:35.779369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.848 94.38 IOPS, 283.12 MiB/s [2024-12-07T02:45:36.926Z] 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.848 "name": "raid_bdev1", 00:12:25.848 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:25.848 "strip_size_kb": 0, 00:12:25.848 "state": "online", 00:12:25.848 "raid_level": "raid1", 00:12:25.848 "superblock": true, 00:12:25.848 "num_base_bdevs": 2, 00:12:25.848 "num_base_bdevs_discovered": 2, 00:12:25.848 "num_base_bdevs_operational": 2, 00:12:25.848 "base_bdevs_list": [ 00:12:25.848 { 00:12:25.848 "name": "spare", 00:12:25.848 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:25.848 "is_configured": true, 00:12:25.848 "data_offset": 2048, 00:12:25.848 "data_size": 63488 00:12:25.848 }, 00:12:25.848 { 00:12:25.848 "name": "BaseBdev2", 00:12:25.848 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:25.848 "is_configured": true, 00:12:25.848 "data_offset": 2048, 00:12:25.848 "data_size": 63488 00:12:25.848 } 00:12:25.848 ] 00:12:25.848 }' 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:25.848 02:45:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.848 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.849 "name": "raid_bdev1", 00:12:25.849 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:25.849 "strip_size_kb": 0, 00:12:25.849 "state": "online", 00:12:25.849 "raid_level": "raid1", 00:12:25.849 "superblock": true, 00:12:25.849 "num_base_bdevs": 2, 00:12:25.849 "num_base_bdevs_discovered": 2, 00:12:25.849 "num_base_bdevs_operational": 2, 00:12:25.849 "base_bdevs_list": [ 00:12:25.849 { 00:12:25.849 "name": "spare", 00:12:25.849 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:25.849 "is_configured": true, 00:12:25.849 "data_offset": 2048, 00:12:25.849 "data_size": 63488 00:12:25.849 }, 00:12:25.849 { 00:12:25.849 "name": "BaseBdev2", 00:12:25.849 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:25.849 "is_configured": true, 00:12:25.849 
"data_offset": 2048, 00:12:25.849 "data_size": 63488 00:12:25.849 } 00:12:25.849 ] 00:12:25.849 }' 00:12:25.849 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.109 02:45:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.109 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.109 "name": "raid_bdev1", 00:12:26.109 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:26.109 "strip_size_kb": 0, 00:12:26.109 "state": "online", 00:12:26.109 "raid_level": "raid1", 00:12:26.109 "superblock": true, 00:12:26.109 "num_base_bdevs": 2, 00:12:26.109 "num_base_bdevs_discovered": 2, 00:12:26.109 "num_base_bdevs_operational": 2, 00:12:26.109 "base_bdevs_list": [ 00:12:26.109 { 00:12:26.109 "name": "spare", 00:12:26.109 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:26.109 "is_configured": true, 00:12:26.109 "data_offset": 2048, 00:12:26.109 "data_size": 63488 00:12:26.109 }, 00:12:26.109 { 00:12:26.109 "name": "BaseBdev2", 00:12:26.109 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:26.109 "is_configured": true, 00:12:26.109 "data_offset": 2048, 00:12:26.109 "data_size": 63488 00:12:26.109 } 00:12:26.109 ] 00:12:26.109 }' 00:12:26.109 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.109 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.369 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.369 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.369 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.369 [2024-12-07 02:45:37.424235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.369 [2024-12-07 02:45:37.424322] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:12:26.630 00:12:26.630 Latency(us) 00:12:26.630 [2024-12-07T02:45:37.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.630 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:26.630 raid_bdev1 : 8.87 88.70 266.11 0.00 0.00 15296.19 257.57 113099.68 00:12:26.630 [2024-12-07T02:45:37.708Z] =================================================================================================================== 00:12:26.630 [2024-12-07T02:45:37.708Z] Total : 88.70 266.11 0.00 0.00 15296.19 257.57 113099.68 00:12:26.630 [2024-12-07 02:45:37.511508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.630 [2024-12-07 02:45:37.511594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.630 [2024-12-07 02:45:37.511720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.630 [2024-12-07 02:45:37.511803] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:26.630 { 00:12:26.630 "results": [ 00:12:26.630 { 00:12:26.630 "job": "raid_bdev1", 00:12:26.630 "core_mask": "0x1", 00:12:26.630 "workload": "randrw", 00:12:26.630 "percentage": 50, 00:12:26.630 "status": "finished", 00:12:26.630 "queue_depth": 2, 00:12:26.630 "io_size": 3145728, 00:12:26.630 "runtime": 8.87236, 00:12:26.630 "iops": 88.70244219125463, 00:12:26.630 "mibps": 266.1073265737639, 00:12:26.630 "io_failed": 0, 00:12:26.630 "io_timeout": 0, 00:12:26.630 "avg_latency_us": 15296.193964144422, 00:12:26.630 "min_latency_us": 257.5650655021834, 00:12:26.630 "max_latency_us": 113099.68209606987 00:12:26.630 } 00:12:26.630 ], 00:12:26.630 "core_count": 1 00:12:26.630 } 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:26.630 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:26.891 /dev/nbd0 00:12:26.891 02:45:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.891 1+0 records in 00:12:26.891 1+0 records out 00:12:26.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643742 s, 6.4 MB/s 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:26.891 02:45:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:27.151 /dev/nbd1 00:12:27.151 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:27.151 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:27.151 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@869 -- # local i 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:27.152 1+0 records in 00:12:27.152 1+0 records out 00:12:27.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654107 s, 6.3 MB/s 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:27.152 02:45:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.152 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.412 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.672 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 [2024-12-07 02:45:38.600910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:27.673 [2024-12-07 02:45:38.601006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.673 [2024-12-07 02:45:38.601051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:27.673 [2024-12-07 02:45:38.601077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.673 [2024-12-07 02:45:38.603645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.673 [2024-12-07 02:45:38.603712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:27.673 [2024-12-07 02:45:38.603844] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:27.673 [2024-12-07 02:45:38.603906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:27.673 [2024-12-07 02:45:38.604074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.673 spare 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 [2024-12-07 02:45:38.704024] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:12:27.673 [2024-12-07 02:45:38.704051] bdev_raid.c:1731:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:12:27.673 [2024-12-07 02:45:38.704318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:12:27.673 [2024-12-07 02:45:38.704459] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:12:27.673 [2024-12-07 02:45:38.704468] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:12:27.673 [2024-12-07 02:45:38.704623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.673 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.933 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.933 "name": "raid_bdev1", 00:12:27.933 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:27.933 "strip_size_kb": 0, 00:12:27.933 "state": "online", 00:12:27.933 "raid_level": "raid1", 00:12:27.933 "superblock": true, 00:12:27.933 "num_base_bdevs": 2, 00:12:27.933 "num_base_bdevs_discovered": 2, 00:12:27.933 "num_base_bdevs_operational": 2, 00:12:27.933 "base_bdevs_list": [ 00:12:27.933 { 00:12:27.933 "name": "spare", 00:12:27.933 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:27.933 "is_configured": true, 00:12:27.933 "data_offset": 2048, 00:12:27.933 "data_size": 63488 00:12:27.933 }, 00:12:27.933 { 00:12:27.933 "name": "BaseBdev2", 00:12:27.933 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:27.933 "is_configured": true, 00:12:27.933 "data_offset": 2048, 00:12:27.933 "data_size": 63488 00:12:27.933 } 00:12:27.933 ] 00:12:27.933 }' 00:12:27.933 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.933 02:45:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.193 "name": "raid_bdev1", 00:12:28.193 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:28.193 "strip_size_kb": 0, 00:12:28.193 "state": "online", 00:12:28.193 "raid_level": "raid1", 00:12:28.193 "superblock": true, 00:12:28.193 "num_base_bdevs": 2, 00:12:28.193 "num_base_bdevs_discovered": 2, 00:12:28.193 "num_base_bdevs_operational": 2, 00:12:28.193 "base_bdevs_list": [ 00:12:28.193 { 00:12:28.193 "name": "spare", 00:12:28.193 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:28.193 "is_configured": true, 00:12:28.193 "data_offset": 2048, 00:12:28.193 "data_size": 63488 00:12:28.193 }, 00:12:28.193 { 00:12:28.193 "name": "BaseBdev2", 00:12:28.193 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:28.193 "is_configured": true, 00:12:28.193 "data_offset": 2048, 00:12:28.193 "data_size": 63488 00:12:28.193 } 00:12:28.193 ] 00:12:28.193 }' 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.193 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.453 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.454 [2024-12-07 02:45:39.375704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.454 02:45:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.454 "name": "raid_bdev1", 00:12:28.454 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:28.454 "strip_size_kb": 0, 00:12:28.454 "state": "online", 00:12:28.454 "raid_level": "raid1", 00:12:28.454 "superblock": true, 00:12:28.454 "num_base_bdevs": 2, 00:12:28.454 "num_base_bdevs_discovered": 1, 00:12:28.454 "num_base_bdevs_operational": 1, 00:12:28.454 "base_bdevs_list": [ 00:12:28.454 { 00:12:28.454 "name": null, 00:12:28.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.454 "is_configured": false, 00:12:28.454 "data_offset": 0, 00:12:28.454 "data_size": 63488 00:12:28.454 }, 00:12:28.454 { 00:12:28.454 "name": "BaseBdev2", 00:12:28.454 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:28.454 "is_configured": true, 00:12:28.454 "data_offset": 2048, 00:12:28.454 
"data_size": 63488 00:12:28.454 } 00:12:28.454 ] 00:12:28.454 }' 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.454 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.023 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:29.023 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.023 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.023 [2024-12-07 02:45:39.827109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.023 [2024-12-07 02:45:39.827334] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:29.023 [2024-12-07 02:45:39.827406] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:29.023 [2024-12-07 02:45:39.827495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:29.023 [2024-12-07 02:45:39.835415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b000 00:12:29.023 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.023 02:45:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:29.023 [2024-12-07 02:45:39.837702] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.963 "name": "raid_bdev1", 00:12:29.963 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:29.963 "strip_size_kb": 0, 00:12:29.963 "state": "online", 
00:12:29.963 "raid_level": "raid1", 00:12:29.963 "superblock": true, 00:12:29.963 "num_base_bdevs": 2, 00:12:29.963 "num_base_bdevs_discovered": 2, 00:12:29.963 "num_base_bdevs_operational": 2, 00:12:29.963 "process": { 00:12:29.963 "type": "rebuild", 00:12:29.963 "target": "spare", 00:12:29.963 "progress": { 00:12:29.963 "blocks": 20480, 00:12:29.963 "percent": 32 00:12:29.963 } 00:12:29.963 }, 00:12:29.963 "base_bdevs_list": [ 00:12:29.963 { 00:12:29.963 "name": "spare", 00:12:29.963 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:29.963 "is_configured": true, 00:12:29.963 "data_offset": 2048, 00:12:29.963 "data_size": 63488 00:12:29.963 }, 00:12:29.963 { 00:12:29.963 "name": "BaseBdev2", 00:12:29.963 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:29.963 "is_configured": true, 00:12:29.963 "data_offset": 2048, 00:12:29.963 "data_size": 63488 00:12:29.963 } 00:12:29.963 ] 00:12:29.963 }' 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.963 02:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.963 [2024-12-07 02:45:41.001545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.222 [2024-12-07 02:45:41.045112] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:30.222 [2024-12-07 
02:45:41.045171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.222 [2024-12-07 02:45:41.045185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:30.222 [2024-12-07 02:45:41.045194] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.222 "name": "raid_bdev1", 00:12:30.222 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:30.222 "strip_size_kb": 0, 00:12:30.222 "state": "online", 00:12:30.222 "raid_level": "raid1", 00:12:30.222 "superblock": true, 00:12:30.222 "num_base_bdevs": 2, 00:12:30.222 "num_base_bdevs_discovered": 1, 00:12:30.222 "num_base_bdevs_operational": 1, 00:12:30.222 "base_bdevs_list": [ 00:12:30.222 { 00:12:30.222 "name": null, 00:12:30.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.222 "is_configured": false, 00:12:30.222 "data_offset": 0, 00:12:30.222 "data_size": 63488 00:12:30.222 }, 00:12:30.222 { 00:12:30.222 "name": "BaseBdev2", 00:12:30.222 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:30.222 "is_configured": true, 00:12:30.222 "data_offset": 2048, 00:12:30.222 "data_size": 63488 00:12:30.222 } 00:12:30.222 ] 00:12:30.222 }' 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.222 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.481 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.481 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.481 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.481 [2024-12-07 02:45:41.547898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.481 [2024-12-07 02:45:41.548006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.481 [2024-12-07 02:45:41.548046] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:12:30.481 [2024-12-07 02:45:41.548077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.481 [2024-12-07 02:45:41.548595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.481 [2024-12-07 02:45:41.548654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.481 [2024-12-07 02:45:41.548768] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:30.481 [2024-12-07 02:45:41.548817] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:30.481 [2024-12-07 02:45:41.548864] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:30.481 [2024-12-07 02:45:41.548944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.481 [2024-12-07 02:45:41.555787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:30.481 spare 00:12:30.481 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.481 02:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:30.740 [2024-12-07 02:45:41.558050] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.739 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.739 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.739 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.739 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.739 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.739 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.740 "name": "raid_bdev1", 00:12:31.740 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:31.740 "strip_size_kb": 0, 00:12:31.740 "state": "online", 00:12:31.740 "raid_level": "raid1", 00:12:31.740 "superblock": true, 00:12:31.740 "num_base_bdevs": 2, 00:12:31.740 "num_base_bdevs_discovered": 2, 00:12:31.740 "num_base_bdevs_operational": 2, 00:12:31.740 "process": { 00:12:31.740 "type": "rebuild", 00:12:31.740 "target": "spare", 00:12:31.740 "progress": { 00:12:31.740 "blocks": 20480, 00:12:31.740 "percent": 32 00:12:31.740 } 00:12:31.740 }, 00:12:31.740 "base_bdevs_list": [ 00:12:31.740 { 00:12:31.740 "name": "spare", 00:12:31.740 "uuid": "c1c94804-387a-5ad5-90c3-d0eafbf14db4", 00:12:31.740 "is_configured": true, 00:12:31.740 "data_offset": 2048, 00:12:31.740 "data_size": 63488 00:12:31.740 }, 00:12:31.740 { 00:12:31.740 "name": "BaseBdev2", 00:12:31.740 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:31.740 "is_configured": true, 00:12:31.740 "data_offset": 2048, 00:12:31.740 "data_size": 63488 00:12:31.740 } 00:12:31.740 ] 00:12:31.740 }' 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.740 [2024-12-07 02:45:42.722142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.740 [2024-12-07 02:45:42.765749] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:31.740 [2024-12-07 02:45:42.765806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.740 [2024-12-07 02:45:42.765823] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.740 [2024-12-07 02:45:42.765831] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.740 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.999 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.999 "name": "raid_bdev1", 00:12:31.999 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:31.999 "strip_size_kb": 0, 00:12:31.999 "state": "online", 00:12:31.999 "raid_level": "raid1", 00:12:31.999 "superblock": true, 00:12:31.999 "num_base_bdevs": 2, 00:12:31.999 "num_base_bdevs_discovered": 1, 00:12:31.999 "num_base_bdevs_operational": 1, 00:12:31.999 "base_bdevs_list": [ 00:12:31.999 { 00:12:31.999 "name": null, 00:12:31.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.999 "is_configured": false, 00:12:31.999 "data_offset": 0, 00:12:31.999 "data_size": 63488 00:12:31.999 }, 00:12:31.999 { 00:12:31.999 "name": "BaseBdev2", 00:12:31.999 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:31.999 "is_configured": true, 00:12:31.999 "data_offset": 2048, 00:12:31.999 "data_size": 63488 00:12:31.999 } 00:12:31.999 ] 00:12:31.999 }' 
00:12:31.999 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.999 02:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.259 "name": "raid_bdev1", 00:12:32.259 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:32.259 "strip_size_kb": 0, 00:12:32.259 "state": "online", 00:12:32.259 "raid_level": "raid1", 00:12:32.259 "superblock": true, 00:12:32.259 "num_base_bdevs": 2, 00:12:32.259 "num_base_bdevs_discovered": 1, 00:12:32.259 "num_base_bdevs_operational": 1, 00:12:32.259 "base_bdevs_list": [ 00:12:32.259 { 00:12:32.259 "name": null, 00:12:32.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.259 "is_configured": false, 00:12:32.259 "data_offset": 0, 
00:12:32.259 "data_size": 63488 00:12:32.259 }, 00:12:32.259 { 00:12:32.259 "name": "BaseBdev2", 00:12:32.259 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:32.259 "is_configured": true, 00:12:32.259 "data_offset": 2048, 00:12:32.259 "data_size": 63488 00:12:32.259 } 00:12:32.259 ] 00:12:32.259 }' 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.259 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.519 [2024-12-07 02:45:43.392438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:32.519 [2024-12-07 02:45:43.392514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.519 [2024-12-07 02:45:43.392540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:32.519 [2024-12-07 02:45:43.392550] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.519 [2024-12-07 02:45:43.393062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.519 [2024-12-07 02:45:43.393088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.519 [2024-12-07 02:45:43.393175] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:32.519 [2024-12-07 02:45:43.393190] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:32.519 [2024-12-07 02:45:43.393212] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:32.519 [2024-12-07 02:45:43.393224] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:32.519 BaseBdev1 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.519 02:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.460 "name": "raid_bdev1", 00:12:33.460 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:33.460 "strip_size_kb": 0, 00:12:33.460 "state": "online", 00:12:33.460 "raid_level": "raid1", 00:12:33.460 "superblock": true, 00:12:33.460 "num_base_bdevs": 2, 00:12:33.460 "num_base_bdevs_discovered": 1, 00:12:33.460 "num_base_bdevs_operational": 1, 00:12:33.460 "base_bdevs_list": [ 00:12:33.460 { 00:12:33.460 "name": null, 00:12:33.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.460 "is_configured": false, 00:12:33.460 "data_offset": 0, 00:12:33.460 "data_size": 63488 00:12:33.460 }, 00:12:33.460 { 00:12:33.460 "name": "BaseBdev2", 00:12:33.460 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:33.460 "is_configured": true, 00:12:33.460 "data_offset": 2048, 00:12:33.460 "data_size": 63488 00:12:33.460 } 00:12:33.460 ] 00:12:33.460 }' 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.460 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.030 "name": "raid_bdev1", 00:12:34.030 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:34.030 "strip_size_kb": 0, 00:12:34.030 "state": "online", 00:12:34.030 "raid_level": "raid1", 00:12:34.030 "superblock": true, 00:12:34.030 "num_base_bdevs": 2, 00:12:34.030 "num_base_bdevs_discovered": 1, 00:12:34.030 "num_base_bdevs_operational": 1, 00:12:34.030 "base_bdevs_list": [ 00:12:34.030 { 00:12:34.030 "name": null, 00:12:34.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.030 "is_configured": false, 00:12:34.030 "data_offset": 0, 00:12:34.030 "data_size": 63488 00:12:34.030 }, 00:12:34.030 { 00:12:34.030 "name": "BaseBdev2", 00:12:34.030 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:34.030 "is_configured": true, 
00:12:34.030 "data_offset": 2048, 00:12:34.030 "data_size": 63488 00:12:34.030 } 00:12:34.030 ] 00:12:34.030 }' 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:34.030 02:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.030 [2024-12-07 02:45:45.041857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.030 [2024-12-07 02:45:45.042073] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:34.030 [2024-12-07 02:45:45.042089] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:34.030 request: 00:12:34.030 { 00:12:34.030 "base_bdev": "BaseBdev1", 00:12:34.030 "raid_bdev": "raid_bdev1", 00:12:34.030 "method": "bdev_raid_add_base_bdev", 00:12:34.030 "req_id": 1 00:12:34.030 } 00:12:34.030 Got JSON-RPC error response 00:12:34.030 response: 00:12:34.030 { 00:12:34.030 "code": -22, 00:12:34.030 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:34.030 } 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.030 02:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.409 "name": "raid_bdev1", 00:12:35.409 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:35.409 "strip_size_kb": 0, 00:12:35.409 "state": "online", 00:12:35.409 "raid_level": "raid1", 00:12:35.409 "superblock": true, 00:12:35.409 "num_base_bdevs": 2, 00:12:35.409 "num_base_bdevs_discovered": 1, 00:12:35.409 "num_base_bdevs_operational": 1, 00:12:35.409 "base_bdevs_list": [ 00:12:35.409 { 00:12:35.409 "name": null, 00:12:35.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.409 "is_configured": false, 00:12:35.409 "data_offset": 0, 00:12:35.409 "data_size": 63488 00:12:35.409 }, 00:12:35.409 { 00:12:35.409 "name": "BaseBdev2", 00:12:35.409 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:35.409 "is_configured": true, 00:12:35.409 "data_offset": 2048, 00:12:35.409 "data_size": 63488 00:12:35.409 } 00:12:35.409 ] 00:12:35.409 }' 
00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.668 "name": "raid_bdev1", 00:12:35.668 "uuid": "1ee5af8a-141f-4cf4-8e83-3defb97f6e6a", 00:12:35.668 "strip_size_kb": 0, 00:12:35.668 "state": "online", 00:12:35.668 "raid_level": "raid1", 00:12:35.668 "superblock": true, 00:12:35.668 "num_base_bdevs": 2, 00:12:35.668 "num_base_bdevs_discovered": 1, 00:12:35.668 "num_base_bdevs_operational": 1, 00:12:35.668 "base_bdevs_list": [ 00:12:35.668 { 00:12:35.668 "name": null, 00:12:35.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.668 "is_configured": false, 00:12:35.668 "data_offset": 0, 
00:12:35.668 "data_size": 63488 00:12:35.668 }, 00:12:35.668 { 00:12:35.668 "name": "BaseBdev2", 00:12:35.668 "uuid": "21062ad8-3796-54d0-bb09-586f666c7af2", 00:12:35.668 "is_configured": true, 00:12:35.668 "data_offset": 2048, 00:12:35.668 "data_size": 63488 00:12:35.668 } 00:12:35.668 ] 00:12:35.668 }' 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87755 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87755 ']' 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87755 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87755 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.668 killing process with pid 87755 00:12:35.668 Received shutdown signal, test time was about 18.034148 seconds 00:12:35.668 00:12:35.668 Latency(us) 00:12:35.668 [2024-12-07T02:45:46.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.668 [2024-12-07T02:45:46.746Z] =================================================================================================================== 00:12:35.668 
[2024-12-07T02:45:46.746Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87755' 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87755 00:12:35.668 [2024-12-07 02:45:46.652129] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.668 02:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87755 00:12:35.668 [2024-12-07 02:45:46.652305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.668 [2024-12-07 02:45:46.652366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.668 [2024-12-07 02:45:46.652391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:12:35.668 [2024-12-07 02:45:46.701171] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:36.238 00:12:36.238 real 0m20.055s 00:12:36.238 user 0m26.269s 00:12:36.238 sys 0m2.385s 00:12:36.238 ************************************ 00:12:36.238 END TEST raid_rebuild_test_sb_io 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.238 ************************************ 00:12:36.238 02:45:47 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:36.238 02:45:47 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:36.238 02:45:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 
']' 00:12:36.238 02:45:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:36.238 02:45:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.238 ************************************ 00:12:36.238 START TEST raid_rebuild_test 00:12:36.238 ************************************ 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # 
(( i <= num_base_bdevs )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=88446 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 88446 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 88446 ']' 00:12:36.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.238 02:45:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.238 [2024-12-07 02:45:47.239274] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:36.238 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:36.238 Zero copy mechanism will not be used. 00:12:36.238 [2024-12-07 02:45:47.239472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88446 ] 00:12:36.498 [2024-12-07 02:45:47.400556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.498 [2024-12-07 02:45:47.468934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.498 [2024-12-07 02:45:47.543995] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.498 [2024-12-07 02:45:47.544031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.066 BaseBdev1_malloc 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.066 [2024-12-07 02:45:48.094008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:37.066 [2024-12-07 02:45:48.094080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.066 [2024-12-07 02:45:48.094110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:37.066 [2024-12-07 02:45:48.094135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.066 [2024-12-07 02:45:48.096608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.066 [2024-12-07 02:45:48.096642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:37.066 BaseBdev1 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.066 02:45:48 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:37.066 BaseBdev2_malloc 00:12:37.067 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.067 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:37.067 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.067 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 [2024-12-07 02:45:48.144078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:37.327 [2024-12-07 02:45:48.144278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.327 [2024-12-07 02:45:48.144335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:37.327 [2024-12-07 02:45:48.144358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.327 [2024-12-07 02:45:48.149389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.327 [2024-12-07 02:45:48.149459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:37.327 BaseBdev2 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 BaseBdev3_malloc 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 [2024-12-07 02:45:48.181348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:37.327 [2024-12-07 02:45:48.181393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.327 [2024-12-07 02:45:48.181421] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:37.327 [2024-12-07 02:45:48.181431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.327 [2024-12-07 02:45:48.183837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.327 [2024-12-07 02:45:48.183870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:37.327 BaseBdev3 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 BaseBdev4_malloc 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.327 02:45:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 [2024-12-07 02:45:48.215825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:37.327 [2024-12-07 02:45:48.215879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.327 [2024-12-07 02:45:48.215906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:37.327 [2024-12-07 02:45:48.215915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.327 [2024-12-07 02:45:48.218255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.327 [2024-12-07 02:45:48.218289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:37.327 BaseBdev4 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 spare_malloc 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 spare_delay 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.327 [2024-12-07 02:45:48.262436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:37.327 [2024-12-07 02:45:48.262486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:37.327 [2024-12-07 02:45:48.262508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:37.327 [2024-12-07 02:45:48.262516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:37.327 [2024-12-07 02:45:48.264980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.327 [2024-12-07 02:45:48.265015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:37.327 spare 00:12:37.327 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.328 [2024-12-07 02:45:48.274497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.328 [2024-12-07 02:45:48.276660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.328 [2024-12-07 02:45:48.276725] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.328 [2024-12-07 02:45:48.276766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:37.328 [2024-12-07 02:45:48.276839] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:37.328 [2024-12-07 02:45:48.276848] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:37.328 [2024-12-07 02:45:48.277081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:37.328 [2024-12-07 02:45:48.277225] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:37.328 [2024-12-07 02:45:48.277238] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:37.328 [2024-12-07 02:45:48.277351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.328 "name": "raid_bdev1", 00:12:37.328 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:37.328 "strip_size_kb": 0, 00:12:37.328 "state": "online", 00:12:37.328 "raid_level": "raid1", 00:12:37.328 "superblock": false, 00:12:37.328 "num_base_bdevs": 4, 00:12:37.328 "num_base_bdevs_discovered": 4, 00:12:37.328 "num_base_bdevs_operational": 4, 00:12:37.328 "base_bdevs_list": [ 00:12:37.328 { 00:12:37.328 "name": "BaseBdev1", 00:12:37.328 "uuid": "0164a4f8-6bc6-5696-ab58-647b490b6f80", 00:12:37.328 "is_configured": true, 00:12:37.328 "data_offset": 0, 00:12:37.328 "data_size": 65536 00:12:37.328 }, 00:12:37.328 { 00:12:37.328 "name": "BaseBdev2", 00:12:37.328 "uuid": "604344c0-2a0d-5eae-9f42-76e076b0518b", 00:12:37.328 "is_configured": true, 00:12:37.328 "data_offset": 0, 00:12:37.328 "data_size": 65536 00:12:37.328 }, 00:12:37.328 { 00:12:37.328 "name": "BaseBdev3", 00:12:37.328 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:37.328 "is_configured": true, 00:12:37.328 "data_offset": 0, 00:12:37.328 "data_size": 65536 00:12:37.328 }, 00:12:37.328 { 00:12:37.328 "name": "BaseBdev4", 00:12:37.328 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:37.328 "is_configured": true, 00:12:37.328 "data_offset": 0, 00:12:37.328 "data_size": 65536 00:12:37.328 } 00:12:37.328 ] 00:12:37.328 }' 00:12:37.328 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.328 02:45:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.897 [2024-12-07 02:45:48.737998] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:37.897 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:37.898 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:37.898 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:37.898 02:45:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:38.157 [2024-12-07 02:45:49.013226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:38.157 /dev/nbd0 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 
-- # (( i <= 20 )) 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.157 1+0 records in 00:12:38.157 1+0 records out 00:12:38.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413159 s, 9.9 MB/s 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:38.157 02:45:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:43.462 65536+0 records in 00:12:43.462 65536+0 records out 00:12:43.462 33554432 bytes (34 MB, 32 MiB) copied, 5.03222 s, 6.7 MB/s 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:43.462 [2024-12-07 02:45:54.314217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.462 [2024-12-07 02:45:54.362205] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:43.462 02:45:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.462 "name": "raid_bdev1", 00:12:43.462 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:43.462 "strip_size_kb": 0, 00:12:43.462 "state": "online", 00:12:43.462 "raid_level": "raid1", 00:12:43.462 "superblock": false, 00:12:43.462 "num_base_bdevs": 4, 00:12:43.462 "num_base_bdevs_discovered": 3, 00:12:43.462 "num_base_bdevs_operational": 3, 00:12:43.462 "base_bdevs_list": [ 00:12:43.462 { 00:12:43.462 "name": null, 00:12:43.462 
"uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.462 "is_configured": false, 00:12:43.462 "data_offset": 0, 00:12:43.462 "data_size": 65536 00:12:43.462 }, 00:12:43.462 { 00:12:43.462 "name": "BaseBdev2", 00:12:43.462 "uuid": "604344c0-2a0d-5eae-9f42-76e076b0518b", 00:12:43.462 "is_configured": true, 00:12:43.462 "data_offset": 0, 00:12:43.462 "data_size": 65536 00:12:43.462 }, 00:12:43.462 { 00:12:43.462 "name": "BaseBdev3", 00:12:43.462 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:43.462 "is_configured": true, 00:12:43.462 "data_offset": 0, 00:12:43.462 "data_size": 65536 00:12:43.462 }, 00:12:43.462 { 00:12:43.462 "name": "BaseBdev4", 00:12:43.462 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:43.462 "is_configured": true, 00:12:43.462 "data_offset": 0, 00:12:43.462 "data_size": 65536 00:12:43.462 } 00:12:43.462 ] 00:12:43.462 }' 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.462 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.033 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.033 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.033 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.033 [2024-12-07 02:45:54.809483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.033 [2024-12-07 02:45:54.815432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:44.033 02:45:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.033 02:45:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:44.033 [2024-12-07 02:45:54.817791] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.969 02:45:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.969 "name": "raid_bdev1", 00:12:44.969 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:44.969 "strip_size_kb": 0, 00:12:44.969 "state": "online", 00:12:44.969 "raid_level": "raid1", 00:12:44.969 "superblock": false, 00:12:44.969 "num_base_bdevs": 4, 00:12:44.969 "num_base_bdevs_discovered": 4, 00:12:44.969 "num_base_bdevs_operational": 4, 00:12:44.969 "process": { 00:12:44.969 "type": "rebuild", 00:12:44.969 "target": "spare", 00:12:44.969 "progress": { 00:12:44.969 "blocks": 20480, 00:12:44.969 "percent": 31 00:12:44.969 } 00:12:44.969 }, 00:12:44.969 "base_bdevs_list": [ 00:12:44.969 { 00:12:44.969 "name": "spare", 00:12:44.969 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:44.969 "is_configured": true, 00:12:44.969 "data_offset": 0, 00:12:44.969 "data_size": 65536 00:12:44.969 }, 00:12:44.969 { 
00:12:44.969 "name": "BaseBdev2", 00:12:44.969 "uuid": "604344c0-2a0d-5eae-9f42-76e076b0518b", 00:12:44.969 "is_configured": true, 00:12:44.969 "data_offset": 0, 00:12:44.969 "data_size": 65536 00:12:44.969 }, 00:12:44.969 { 00:12:44.969 "name": "BaseBdev3", 00:12:44.969 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:44.969 "is_configured": true, 00:12:44.969 "data_offset": 0, 00:12:44.969 "data_size": 65536 00:12:44.969 }, 00:12:44.969 { 00:12:44.969 "name": "BaseBdev4", 00:12:44.969 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:44.969 "is_configured": true, 00:12:44.969 "data_offset": 0, 00:12:44.969 "data_size": 65536 00:12:44.969 } 00:12:44.969 ] 00:12:44.969 }' 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.969 02:45:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.969 [2024-12-07 02:45:55.954219] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.969 [2024-12-07 02:45:56.026989] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:44.969 [2024-12-07 02:45:56.027140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.969 [2024-12-07 02:45:56.027168] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.969 [2024-12-07 02:45:56.027188] 
bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.969 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.229 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.229 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.229 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.229 "name": "raid_bdev1", 00:12:45.229 "uuid": 
"b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:45.229 "strip_size_kb": 0, 00:12:45.229 "state": "online", 00:12:45.229 "raid_level": "raid1", 00:12:45.229 "superblock": false, 00:12:45.229 "num_base_bdevs": 4, 00:12:45.229 "num_base_bdevs_discovered": 3, 00:12:45.229 "num_base_bdevs_operational": 3, 00:12:45.229 "base_bdevs_list": [ 00:12:45.229 { 00:12:45.229 "name": null, 00:12:45.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.229 "is_configured": false, 00:12:45.229 "data_offset": 0, 00:12:45.229 "data_size": 65536 00:12:45.229 }, 00:12:45.229 { 00:12:45.229 "name": "BaseBdev2", 00:12:45.229 "uuid": "604344c0-2a0d-5eae-9f42-76e076b0518b", 00:12:45.229 "is_configured": true, 00:12:45.229 "data_offset": 0, 00:12:45.229 "data_size": 65536 00:12:45.229 }, 00:12:45.229 { 00:12:45.229 "name": "BaseBdev3", 00:12:45.229 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:45.229 "is_configured": true, 00:12:45.229 "data_offset": 0, 00:12:45.229 "data_size": 65536 00:12:45.229 }, 00:12:45.229 { 00:12:45.229 "name": "BaseBdev4", 00:12:45.229 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:45.229 "is_configured": true, 00:12:45.229 "data_offset": 0, 00:12:45.229 "data_size": 65536 00:12:45.229 } 00:12:45.229 ] 00:12:45.229 }' 00:12:45.229 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.229 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.488 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.488 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.488 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.488 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.488 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:45.488 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.488 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.489 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.489 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.489 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.489 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.489 "name": "raid_bdev1", 00:12:45.489 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:45.489 "strip_size_kb": 0, 00:12:45.489 "state": "online", 00:12:45.489 "raid_level": "raid1", 00:12:45.489 "superblock": false, 00:12:45.489 "num_base_bdevs": 4, 00:12:45.489 "num_base_bdevs_discovered": 3, 00:12:45.489 "num_base_bdevs_operational": 3, 00:12:45.489 "base_bdevs_list": [ 00:12:45.489 { 00:12:45.489 "name": null, 00:12:45.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.489 "is_configured": false, 00:12:45.489 "data_offset": 0, 00:12:45.489 "data_size": 65536 00:12:45.489 }, 00:12:45.489 { 00:12:45.489 "name": "BaseBdev2", 00:12:45.489 "uuid": "604344c0-2a0d-5eae-9f42-76e076b0518b", 00:12:45.489 "is_configured": true, 00:12:45.489 "data_offset": 0, 00:12:45.489 "data_size": 65536 00:12:45.489 }, 00:12:45.489 { 00:12:45.489 "name": "BaseBdev3", 00:12:45.489 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:45.489 "is_configured": true, 00:12:45.489 "data_offset": 0, 00:12:45.489 "data_size": 65536 00:12:45.489 }, 00:12:45.489 { 00:12:45.489 "name": "BaseBdev4", 00:12:45.489 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:45.489 "is_configured": true, 00:12:45.489 "data_offset": 0, 00:12:45.489 "data_size": 65536 00:12:45.489 } 00:12:45.489 ] 00:12:45.489 }' 00:12:45.489 02:45:56 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.749 [2024-12-07 02:45:56.630609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.749 [2024-12-07 02:45:56.634186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:45.749 [2024-12-07 02:45:56.636108] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.749 02:45:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.691 "name": "raid_bdev1", 00:12:46.691 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:46.691 "strip_size_kb": 0, 00:12:46.691 "state": "online", 00:12:46.691 "raid_level": "raid1", 00:12:46.691 "superblock": false, 00:12:46.691 "num_base_bdevs": 4, 00:12:46.691 "num_base_bdevs_discovered": 4, 00:12:46.691 "num_base_bdevs_operational": 4, 00:12:46.691 "process": { 00:12:46.691 "type": "rebuild", 00:12:46.691 "target": "spare", 00:12:46.691 "progress": { 00:12:46.691 "blocks": 20480, 00:12:46.691 "percent": 31 00:12:46.691 } 00:12:46.691 }, 00:12:46.691 "base_bdevs_list": [ 00:12:46.691 { 00:12:46.691 "name": "spare", 00:12:46.691 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:46.691 "is_configured": true, 00:12:46.691 "data_offset": 0, 00:12:46.691 "data_size": 65536 00:12:46.691 }, 00:12:46.691 { 00:12:46.691 "name": "BaseBdev2", 00:12:46.691 "uuid": "604344c0-2a0d-5eae-9f42-76e076b0518b", 00:12:46.691 "is_configured": true, 00:12:46.691 "data_offset": 0, 00:12:46.691 "data_size": 65536 00:12:46.691 }, 00:12:46.691 { 00:12:46.691 "name": "BaseBdev3", 00:12:46.691 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:46.691 "is_configured": true, 00:12:46.691 "data_offset": 0, 00:12:46.691 "data_size": 65536 00:12:46.691 }, 00:12:46.691 { 00:12:46.691 "name": "BaseBdev4", 00:12:46.691 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:46.691 "is_configured": true, 00:12:46.691 "data_offset": 0, 00:12:46.691 "data_size": 65536 00:12:46.691 } 00:12:46.691 ] 00:12:46.691 }' 
00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.691 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.959 [2024-12-07 02:45:57.795728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.959 [2024-12-07 02:45:57.841097] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09ca0 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.959 02:45:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.959 "name": "raid_bdev1", 00:12:46.959 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:46.959 "strip_size_kb": 0, 00:12:46.959 "state": "online", 00:12:46.959 "raid_level": "raid1", 00:12:46.959 "superblock": false, 00:12:46.959 "num_base_bdevs": 4, 00:12:46.959 "num_base_bdevs_discovered": 3, 00:12:46.959 "num_base_bdevs_operational": 3, 00:12:46.959 "process": { 00:12:46.959 "type": "rebuild", 00:12:46.959 "target": "spare", 00:12:46.959 "progress": { 00:12:46.959 "blocks": 24576, 00:12:46.959 "percent": 37 00:12:46.959 } 00:12:46.959 }, 00:12:46.959 "base_bdevs_list": [ 00:12:46.959 { 00:12:46.959 "name": "spare", 00:12:46.959 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:46.959 "is_configured": true, 00:12:46.959 "data_offset": 0, 00:12:46.959 "data_size": 65536 00:12:46.959 }, 00:12:46.959 { 00:12:46.959 "name": null, 00:12:46.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.959 "is_configured": false, 00:12:46.959 "data_offset": 0, 00:12:46.959 "data_size": 65536 00:12:46.959 }, 00:12:46.959 { 00:12:46.959 "name": 
"BaseBdev3", 00:12:46.959 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:46.959 "is_configured": true, 00:12:46.959 "data_offset": 0, 00:12:46.959 "data_size": 65536 00:12:46.959 }, 00:12:46.959 { 00:12:46.959 "name": "BaseBdev4", 00:12:46.959 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:46.959 "is_configured": true, 00:12:46.959 "data_offset": 0, 00:12:46.959 "data_size": 65536 00:12:46.959 } 00:12:46.959 ] 00:12:46.959 }' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.959 02:45:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.959 02:45:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.959 "name": "raid_bdev1", 00:12:46.959 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:46.959 "strip_size_kb": 0, 00:12:46.959 "state": "online", 00:12:46.959 "raid_level": "raid1", 00:12:46.959 "superblock": false, 00:12:46.959 "num_base_bdevs": 4, 00:12:46.959 "num_base_bdevs_discovered": 3, 00:12:46.959 "num_base_bdevs_operational": 3, 00:12:46.959 "process": { 00:12:46.959 "type": "rebuild", 00:12:46.959 "target": "spare", 00:12:46.959 "progress": { 00:12:46.960 "blocks": 26624, 00:12:46.960 "percent": 40 00:12:46.960 } 00:12:46.960 }, 00:12:46.960 "base_bdevs_list": [ 00:12:46.960 { 00:12:46.960 "name": "spare", 00:12:46.960 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:46.960 "is_configured": true, 00:12:46.960 "data_offset": 0, 00:12:46.960 "data_size": 65536 00:12:46.960 }, 00:12:46.960 { 00:12:46.960 "name": null, 00:12:46.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.960 "is_configured": false, 00:12:46.960 "data_offset": 0, 00:12:46.960 "data_size": 65536 00:12:46.960 }, 00:12:46.960 { 00:12:46.960 "name": "BaseBdev3", 00:12:46.960 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:46.960 "is_configured": true, 00:12:46.960 "data_offset": 0, 00:12:46.960 "data_size": 65536 00:12:46.960 }, 00:12:46.960 { 00:12:46.960 "name": "BaseBdev4", 00:12:46.960 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:46.960 "is_configured": true, 00:12:46.960 "data_offset": 0, 00:12:46.960 "data_size": 65536 00:12:46.960 } 00:12:46.960 ] 00:12:46.960 }' 00:12:46.960 02:45:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.239 02:45:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:47.239 02:45:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.239 02:45:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.239 02:45:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.202 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.203 "name": "raid_bdev1", 00:12:48.203 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:48.203 "strip_size_kb": 0, 00:12:48.203 "state": "online", 00:12:48.203 "raid_level": "raid1", 00:12:48.203 "superblock": false, 00:12:48.203 "num_base_bdevs": 4, 00:12:48.203 "num_base_bdevs_discovered": 3, 00:12:48.203 "num_base_bdevs_operational": 3, 00:12:48.203 "process": { 
00:12:48.203 "type": "rebuild", 00:12:48.203 "target": "spare", 00:12:48.203 "progress": { 00:12:48.203 "blocks": 49152, 00:12:48.203 "percent": 75 00:12:48.203 } 00:12:48.203 }, 00:12:48.203 "base_bdevs_list": [ 00:12:48.203 { 00:12:48.203 "name": "spare", 00:12:48.203 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:48.203 "is_configured": true, 00:12:48.203 "data_offset": 0, 00:12:48.203 "data_size": 65536 00:12:48.203 }, 00:12:48.203 { 00:12:48.203 "name": null, 00:12:48.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.203 "is_configured": false, 00:12:48.203 "data_offset": 0, 00:12:48.203 "data_size": 65536 00:12:48.203 }, 00:12:48.203 { 00:12:48.203 "name": "BaseBdev3", 00:12:48.203 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:48.203 "is_configured": true, 00:12:48.203 "data_offset": 0, 00:12:48.203 "data_size": 65536 00:12:48.203 }, 00:12:48.203 { 00:12:48.203 "name": "BaseBdev4", 00:12:48.203 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:48.203 "is_configured": true, 00:12:48.203 "data_offset": 0, 00:12:48.203 "data_size": 65536 00:12:48.203 } 00:12:48.203 ] 00:12:48.203 }' 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.203 02:45:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.774 [2024-12-07 02:45:59.848570] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:48.774 [2024-12-07 02:45:59.848740] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:48.774 [2024-12-07 02:45:59.848791] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.344 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.344 "name": "raid_bdev1", 00:12:49.344 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:49.344 "strip_size_kb": 0, 00:12:49.344 "state": "online", 00:12:49.344 "raid_level": "raid1", 00:12:49.344 "superblock": false, 00:12:49.344 "num_base_bdevs": 4, 00:12:49.344 "num_base_bdevs_discovered": 3, 00:12:49.344 "num_base_bdevs_operational": 3, 00:12:49.344 "base_bdevs_list": [ 00:12:49.344 { 00:12:49.344 "name": "spare", 00:12:49.344 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:49.344 "is_configured": true, 00:12:49.344 "data_offset": 0, 00:12:49.344 "data_size": 65536 00:12:49.344 }, 00:12:49.344 { 00:12:49.344 "name": null, 
00:12:49.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.344 "is_configured": false, 00:12:49.344 "data_offset": 0, 00:12:49.344 "data_size": 65536 00:12:49.344 }, 00:12:49.344 { 00:12:49.344 "name": "BaseBdev3", 00:12:49.344 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:49.345 "is_configured": true, 00:12:49.345 "data_offset": 0, 00:12:49.345 "data_size": 65536 00:12:49.345 }, 00:12:49.345 { 00:12:49.345 "name": "BaseBdev4", 00:12:49.345 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:49.345 "is_configured": true, 00:12:49.345 "data_offset": 0, 00:12:49.345 "data_size": 65536 00:12:49.345 } 00:12:49.345 ] 00:12:49.345 }' 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.345 02:46:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.345 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.605 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.605 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.605 "name": "raid_bdev1", 00:12:49.606 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:49.606 "strip_size_kb": 0, 00:12:49.606 "state": "online", 00:12:49.606 "raid_level": "raid1", 00:12:49.606 "superblock": false, 00:12:49.606 "num_base_bdevs": 4, 00:12:49.606 "num_base_bdevs_discovered": 3, 00:12:49.606 "num_base_bdevs_operational": 3, 00:12:49.606 "base_bdevs_list": [ 00:12:49.606 { 00:12:49.606 "name": "spare", 00:12:49.606 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:49.606 "is_configured": true, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 }, 00:12:49.606 { 00:12:49.606 "name": null, 00:12:49.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.606 "is_configured": false, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 }, 00:12:49.606 { 00:12:49.606 "name": "BaseBdev3", 00:12:49.606 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:49.606 "is_configured": true, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 }, 00:12:49.606 { 00:12:49.606 "name": "BaseBdev4", 00:12:49.606 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:49.606 "is_configured": true, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 } 00:12:49.606 ] 00:12:49.606 }' 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.606 "name": "raid_bdev1", 00:12:49.606 "uuid": "b305bfb3-4d9c-493b-ad0b-bdd44f6abcc0", 00:12:49.606 "strip_size_kb": 0, 00:12:49.606 "state": "online", 
00:12:49.606 "raid_level": "raid1", 00:12:49.606 "superblock": false, 00:12:49.606 "num_base_bdevs": 4, 00:12:49.606 "num_base_bdevs_discovered": 3, 00:12:49.606 "num_base_bdevs_operational": 3, 00:12:49.606 "base_bdevs_list": [ 00:12:49.606 { 00:12:49.606 "name": "spare", 00:12:49.606 "uuid": "f9ba38ac-d1bc-5f9f-8884-09a6e0c171b2", 00:12:49.606 "is_configured": true, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 }, 00:12:49.606 { 00:12:49.606 "name": null, 00:12:49.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.606 "is_configured": false, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 }, 00:12:49.606 { 00:12:49.606 "name": "BaseBdev3", 00:12:49.606 "uuid": "76cfc1cf-86e7-5227-960e-7f7120dcd369", 00:12:49.606 "is_configured": true, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 }, 00:12:49.606 { 00:12:49.606 "name": "BaseBdev4", 00:12:49.606 "uuid": "99cb9927-2eab-542e-9dda-40467fda27d6", 00:12:49.606 "is_configured": true, 00:12:49.606 "data_offset": 0, 00:12:49.606 "data_size": 65536 00:12:49.606 } 00:12:49.606 ] 00:12:49.606 }' 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.606 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.176 [2024-12-07 02:46:00.974910] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.176 [2024-12-07 02:46:00.974993] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.176 [2024-12-07 02:46:00.975175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:12:50.176 [2024-12-07 02:46:00.975306] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.176 [2024-12-07 02:46:00.975356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:50.176 02:46:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local 
i 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:50.176 /dev/nbd0 00:12:50.176 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.436 1+0 records in 00:12:50.436 1+0 records out 00:12:50.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580053 s, 7.1 MB/s 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:50.436 02:46:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:50.436 /dev/nbd1 00:12:50.436 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.696 1+0 records in 00:12:50.696 1+0 records out 00:12:50.696 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000436772 s, 9.4 MB/s 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.696 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.956 02:46:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 88446 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 88446 ']' 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 88446 00:12:51.216 
02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88446 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88446' 00:12:51.216 killing process with pid 88446 00:12:51.216 Received shutdown signal, test time was about 60.000000 seconds 00:12:51.216 00:12:51.216 Latency(us) 00:12:51.216 [2024-12-07T02:46:02.294Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.216 [2024-12-07T02:46:02.294Z] =================================================================================================================== 00:12:51.216 [2024-12-07T02:46:02.294Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 88446 00:12:51.216 [2024-12-07 02:46:02.087887] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.216 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 88446 00:12:51.216 [2024-12-07 02:46:02.184506] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:51.786 00:12:51.786 real 0m15.412s 00:12:51.786 user 0m17.450s 00:12:51.786 sys 0m3.016s 00:12:51.786 ************************************ 00:12:51.786 END TEST raid_rebuild_test 00:12:51.786 ************************************ 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.786 02:46:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:12:51.786 02:46:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:51.786 02:46:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.786 02:46:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.786 ************************************ 00:12:51.786 START TEST raid_rebuild_test_sb 00:12:51.786 ************************************ 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.786 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- 
# raid_pid=88874 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88874 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88874 ']' 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.787 02:46:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.787 [2024-12-07 02:46:02.736688] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:12:51.787 [2024-12-07 02:46:02.736910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:51.787 Zero copy mechanism will not be used. 
00:12:51.787 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88874 ] 00:12:52.046 [2024-12-07 02:46:02.900559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.046 [2024-12-07 02:46:02.973971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.046 [2024-12-07 02:46:03.052572] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.046 [2024-12-07 02:46:03.052686] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.616 BaseBdev1_malloc 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.616 [2024-12-07 02:46:03.572547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:52.616 [2024-12-07 02:46:03.572652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:52.616 [2024-12-07 02:46:03.572692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:52.616 [2024-12-07 02:46:03.572707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.616 [2024-12-07 02:46:03.575170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.616 [2024-12-07 02:46:03.575204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:52.616 BaseBdev1 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.616 BaseBdev2_malloc 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.616 [2024-12-07 02:46:03.625252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:52.616 [2024-12-07 02:46:03.625482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.616 [2024-12-07 02:46:03.625540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:52.616 [2024-12-07 02:46:03.625564] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.616 [2024-12-07 02:46:03.630732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.616 [2024-12-07 02:46:03.630802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:52.616 BaseBdev2 00:12:52.616 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 BaseBdev3_malloc 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 [2024-12-07 02:46:03.663217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:52.617 [2024-12-07 02:46:03.663346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.617 [2024-12-07 02:46:03.663394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:52.617 [2024-12-07 02:46:03.663422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.617 [2024-12-07 02:46:03.665899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:52.617 [2024-12-07 02:46:03.665968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:52.617 BaseBdev3 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.617 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.877 BaseBdev4_malloc 00:12:52.877 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.877 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 [2024-12-07 02:46:03.698322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:52.878 [2024-12-07 02:46:03.698377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.878 [2024-12-07 02:46:03.698404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:52.878 [2024-12-07 02:46:03.698413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.878 [2024-12-07 02:46:03.700765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.878 [2024-12-07 02:46:03.700856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:52.878 BaseBdev4 00:12:52.878 02:46:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 spare_malloc 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 spare_delay 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 [2024-12-07 02:46:03.745482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:52.878 [2024-12-07 02:46:03.745592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.878 [2024-12-07 02:46:03.745620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:52.878 [2024-12-07 02:46:03.745630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.878 [2024-12-07 02:46:03.747959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:12:52.878 [2024-12-07 02:46:03.747996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:52.878 spare 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 [2024-12-07 02:46:03.757557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.878 [2024-12-07 02:46:03.759608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.878 [2024-12-07 02:46:03.759737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:52.878 [2024-12-07 02:46:03.759786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:52.878 [2024-12-07 02:46:03.759955] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:12:52.878 [2024-12-07 02:46:03.759966] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.878 [2024-12-07 02:46:03.760206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:52.878 [2024-12-07 02:46:03.760365] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:12:52.878 [2024-12-07 02:46:03.760384] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:12:52.878 [2024-12-07 02:46:03.760503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.878 "name": "raid_bdev1", 00:12:52.878 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:12:52.878 "strip_size_kb": 0, 00:12:52.878 "state": "online", 00:12:52.878 "raid_level": "raid1", 
00:12:52.878 "superblock": true, 00:12:52.878 "num_base_bdevs": 4, 00:12:52.878 "num_base_bdevs_discovered": 4, 00:12:52.878 "num_base_bdevs_operational": 4, 00:12:52.878 "base_bdevs_list": [ 00:12:52.878 { 00:12:52.878 "name": "BaseBdev1", 00:12:52.878 "uuid": "ecb113ab-5565-5b2e-856e-d4784f6cc3fa", 00:12:52.878 "is_configured": true, 00:12:52.878 "data_offset": 2048, 00:12:52.878 "data_size": 63488 00:12:52.878 }, 00:12:52.878 { 00:12:52.878 "name": "BaseBdev2", 00:12:52.878 "uuid": "a9c2eeea-b143-55e2-9f89-38dc1f3e1ab0", 00:12:52.878 "is_configured": true, 00:12:52.878 "data_offset": 2048, 00:12:52.878 "data_size": 63488 00:12:52.878 }, 00:12:52.878 { 00:12:52.878 "name": "BaseBdev3", 00:12:52.878 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:12:52.878 "is_configured": true, 00:12:52.878 "data_offset": 2048, 00:12:52.878 "data_size": 63488 00:12:52.878 }, 00:12:52.878 { 00:12:52.878 "name": "BaseBdev4", 00:12:52.878 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:12:52.878 "is_configured": true, 00:12:52.878 "data_offset": 2048, 00:12:52.878 "data_size": 63488 00:12:52.878 } 00:12:52.878 ] 00:12:52.878 }' 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.878 02:46:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.449 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:53.449 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:53.449 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.449 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.449 [2024-12-07 02:46:04.232973] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.449 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.450 
02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:12:53.450 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:53.450 [2024-12-07 02:46:04.504322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:53.450 /dev/nbd0 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.710 1+0 records in 00:12:53.710 1+0 records out 00:12:53.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538683 s, 7.6 MB/s 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:53.710 02:46:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:53.710 02:46:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:58.980 63488+0 records in 00:12:58.981 63488+0 records out 00:12:58.981 32505856 bytes (33 MB, 31 MiB) copied, 4.82022 s, 6.7 MB/s 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.981 [2024-12-07 02:46:09.610683] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.981 [2024-12-07 02:46:09.626739] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.981 
02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.981 "name": "raid_bdev1", 00:12:58.981 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:12:58.981 "strip_size_kb": 0, 00:12:58.981 "state": "online", 00:12:58.981 "raid_level": "raid1", 00:12:58.981 "superblock": true, 00:12:58.981 "num_base_bdevs": 4, 00:12:58.981 "num_base_bdevs_discovered": 3, 00:12:58.981 "num_base_bdevs_operational": 3, 00:12:58.981 "base_bdevs_list": [ 00:12:58.981 { 00:12:58.981 "name": null, 00:12:58.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.981 "is_configured": false, 00:12:58.981 "data_offset": 0, 00:12:58.981 "data_size": 63488 00:12:58.981 }, 00:12:58.981 { 00:12:58.981 "name": "BaseBdev2", 00:12:58.981 "uuid": "a9c2eeea-b143-55e2-9f89-38dc1f3e1ab0", 00:12:58.981 "is_configured": true, 00:12:58.981 "data_offset": 2048, 00:12:58.981 "data_size": 63488 00:12:58.981 }, 00:12:58.981 { 00:12:58.981 "name": "BaseBdev3", 00:12:58.981 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 
00:12:58.981 "is_configured": true, 00:12:58.981 "data_offset": 2048, 00:12:58.981 "data_size": 63488 00:12:58.981 }, 00:12:58.981 { 00:12:58.981 "name": "BaseBdev4", 00:12:58.981 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:12:58.981 "is_configured": true, 00:12:58.981 "data_offset": 2048, 00:12:58.981 "data_size": 63488 00:12:58.981 } 00:12:58.981 ] 00:12:58.981 }' 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.981 02:46:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.240 02:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.240 02:46:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.240 02:46:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.240 [2024-12-07 02:46:10.094028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.240 [2024-12-07 02:46:10.100154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:59.240 02:46:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.240 02:46:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:59.240 [2024-12-07 02:46:10.102440] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.176 "name": "raid_bdev1", 00:13:00.176 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:00.176 "strip_size_kb": 0, 00:13:00.176 "state": "online", 00:13:00.176 "raid_level": "raid1", 00:13:00.176 "superblock": true, 00:13:00.176 "num_base_bdevs": 4, 00:13:00.176 "num_base_bdevs_discovered": 4, 00:13:00.176 "num_base_bdevs_operational": 4, 00:13:00.176 "process": { 00:13:00.176 "type": "rebuild", 00:13:00.176 "target": "spare", 00:13:00.176 "progress": { 00:13:00.176 "blocks": 20480, 00:13:00.176 "percent": 32 00:13:00.176 } 00:13:00.176 }, 00:13:00.176 "base_bdevs_list": [ 00:13:00.176 { 00:13:00.176 "name": "spare", 00:13:00.176 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:00.176 "is_configured": true, 00:13:00.176 "data_offset": 2048, 00:13:00.176 "data_size": 63488 00:13:00.176 }, 00:13:00.176 { 00:13:00.176 "name": "BaseBdev2", 00:13:00.176 "uuid": "a9c2eeea-b143-55e2-9f89-38dc1f3e1ab0", 00:13:00.176 "is_configured": true, 00:13:00.176 "data_offset": 2048, 00:13:00.176 "data_size": 63488 00:13:00.176 }, 00:13:00.176 { 00:13:00.176 "name": "BaseBdev3", 00:13:00.176 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:00.176 "is_configured": true, 00:13:00.176 "data_offset": 2048, 00:13:00.176 "data_size": 63488 00:13:00.176 }, 00:13:00.176 { 
00:13:00.176 "name": "BaseBdev4", 00:13:00.176 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:00.176 "is_configured": true, 00:13:00.176 "data_offset": 2048, 00:13:00.176 "data_size": 63488 00:13:00.176 } 00:13:00.176 ] 00:13:00.176 }' 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.176 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.435 [2024-12-07 02:46:11.258621] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.435 [2024-12-07 02:46:11.311357] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:00.435 [2024-12-07 02:46:11.311422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.435 [2024-12-07 02:46:11.311443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.435 [2024-12-07 02:46:11.311452] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.435 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.435 "name": "raid_bdev1", 00:13:00.435 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:00.435 "strip_size_kb": 0, 00:13:00.435 "state": "online", 00:13:00.435 "raid_level": "raid1", 00:13:00.435 "superblock": true, 00:13:00.435 "num_base_bdevs": 4, 00:13:00.436 "num_base_bdevs_discovered": 3, 00:13:00.436 "num_base_bdevs_operational": 3, 00:13:00.436 "base_bdevs_list": [ 00:13:00.436 { 00:13:00.436 "name": null, 00:13:00.436 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:00.436 "is_configured": false, 00:13:00.436 "data_offset": 0, 00:13:00.436 "data_size": 63488 00:13:00.436 }, 00:13:00.436 { 00:13:00.436 "name": "BaseBdev2", 00:13:00.436 "uuid": "a9c2eeea-b143-55e2-9f89-38dc1f3e1ab0", 00:13:00.436 "is_configured": true, 00:13:00.436 "data_offset": 2048, 00:13:00.436 "data_size": 63488 00:13:00.436 }, 00:13:00.436 { 00:13:00.436 "name": "BaseBdev3", 00:13:00.436 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:00.436 "is_configured": true, 00:13:00.436 "data_offset": 2048, 00:13:00.436 "data_size": 63488 00:13:00.436 }, 00:13:00.436 { 00:13:00.436 "name": "BaseBdev4", 00:13:00.436 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:00.436 "is_configured": true, 00:13:00.436 "data_offset": 2048, 00:13:00.436 "data_size": 63488 00:13:00.436 } 00:13:00.436 ] 00:13:00.436 }' 00:13:00.436 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.436 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.694 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.694 02:46:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.953 "name": "raid_bdev1", 00:13:00.953 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:00.953 "strip_size_kb": 0, 00:13:00.953 "state": "online", 00:13:00.953 "raid_level": "raid1", 00:13:00.953 "superblock": true, 00:13:00.953 "num_base_bdevs": 4, 00:13:00.953 "num_base_bdevs_discovered": 3, 00:13:00.953 "num_base_bdevs_operational": 3, 00:13:00.953 "base_bdevs_list": [ 00:13:00.953 { 00:13:00.953 "name": null, 00:13:00.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.953 "is_configured": false, 00:13:00.953 "data_offset": 0, 00:13:00.953 "data_size": 63488 00:13:00.953 }, 00:13:00.953 { 00:13:00.953 "name": "BaseBdev2", 00:13:00.953 "uuid": "a9c2eeea-b143-55e2-9f89-38dc1f3e1ab0", 00:13:00.953 "is_configured": true, 00:13:00.953 "data_offset": 2048, 00:13:00.953 "data_size": 63488 00:13:00.953 }, 00:13:00.953 { 00:13:00.953 "name": "BaseBdev3", 00:13:00.953 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:00.953 "is_configured": true, 00:13:00.953 "data_offset": 2048, 00:13:00.953 "data_size": 63488 00:13:00.953 }, 00:13:00.953 { 00:13:00.953 "name": "BaseBdev4", 00:13:00.953 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:00.953 "is_configured": true, 00:13:00.953 "data_offset": 2048, 00:13:00.953 "data_size": 63488 00:13:00.953 } 00:13:00.953 ] 00:13:00.953 }' 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.953 [2024-12-07 02:46:11.909883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:00.953 [2024-12-07 02:46:11.915797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.953 02:46:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:00.953 [2024-12-07 02:46:11.918018] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.890 02:46:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.890 02:46:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.149 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.149 "name": "raid_bdev1", 00:13:02.149 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:02.149 "strip_size_kb": 0, 00:13:02.149 "state": "online", 00:13:02.149 "raid_level": "raid1", 00:13:02.149 "superblock": true, 00:13:02.149 "num_base_bdevs": 4, 00:13:02.149 "num_base_bdevs_discovered": 4, 00:13:02.150 "num_base_bdevs_operational": 4, 00:13:02.150 "process": { 00:13:02.150 "type": "rebuild", 00:13:02.150 "target": "spare", 00:13:02.150 "progress": { 00:13:02.150 "blocks": 20480, 00:13:02.150 "percent": 32 00:13:02.150 } 00:13:02.150 }, 00:13:02.150 "base_bdevs_list": [ 00:13:02.150 { 00:13:02.150 "name": "spare", 00:13:02.150 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:02.150 "is_configured": true, 00:13:02.150 "data_offset": 2048, 00:13:02.150 "data_size": 63488 00:13:02.150 }, 00:13:02.150 { 00:13:02.150 "name": "BaseBdev2", 00:13:02.150 "uuid": "a9c2eeea-b143-55e2-9f89-38dc1f3e1ab0", 00:13:02.150 "is_configured": true, 00:13:02.150 "data_offset": 2048, 00:13:02.150 "data_size": 63488 00:13:02.150 }, 00:13:02.150 { 00:13:02.150 "name": "BaseBdev3", 00:13:02.150 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:02.150 "is_configured": true, 00:13:02.150 "data_offset": 2048, 00:13:02.150 "data_size": 63488 00:13:02.150 }, 00:13:02.150 { 00:13:02.150 "name": "BaseBdev4", 00:13:02.150 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:02.150 "is_configured": true, 00:13:02.150 "data_offset": 2048, 00:13:02.150 "data_size": 63488 00:13:02.150 } 00:13:02.150 ] 00:13:02.150 }' 00:13:02.150 02:46:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:02.150 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.150 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.150 [2024-12-07 02:46:13.058319] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.443 [2024-12-07 02:46:13.226030] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3430 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.443 "name": "raid_bdev1", 00:13:02.443 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:02.443 "strip_size_kb": 0, 00:13:02.443 "state": "online", 00:13:02.443 "raid_level": "raid1", 00:13:02.443 "superblock": true, 00:13:02.443 "num_base_bdevs": 4, 00:13:02.443 "num_base_bdevs_discovered": 3, 00:13:02.443 "num_base_bdevs_operational": 3, 00:13:02.443 "process": { 00:13:02.443 "type": "rebuild", 00:13:02.443 "target": "spare", 00:13:02.443 "progress": { 00:13:02.443 "blocks": 24576, 00:13:02.443 "percent": 38 00:13:02.443 } 00:13:02.443 }, 00:13:02.443 "base_bdevs_list": [ 00:13:02.443 { 00:13:02.443 "name": "spare", 00:13:02.443 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:02.443 "is_configured": true, 00:13:02.443 "data_offset": 2048, 00:13:02.443 "data_size": 63488 00:13:02.443 }, 00:13:02.443 { 00:13:02.443 "name": null, 00:13:02.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.443 "is_configured": false, 00:13:02.443 "data_offset": 0, 00:13:02.443 "data_size": 63488 00:13:02.443 }, 00:13:02.443 { 00:13:02.443 "name": "BaseBdev3", 
00:13:02.443 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:02.443 "is_configured": true, 00:13:02.443 "data_offset": 2048, 00:13:02.443 "data_size": 63488 00:13:02.443 }, 00:13:02.443 { 00:13:02.443 "name": "BaseBdev4", 00:13:02.443 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:02.443 "is_configured": true, 00:13:02.443 "data_offset": 2048, 00:13:02.443 "data_size": 63488 00:13:02.443 } 00:13:02.443 ] 00:13:02.443 }' 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=386 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.443 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.444 "name": "raid_bdev1", 00:13:02.444 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:02.444 "strip_size_kb": 0, 00:13:02.444 "state": "online", 00:13:02.444 "raid_level": "raid1", 00:13:02.444 "superblock": true, 00:13:02.444 "num_base_bdevs": 4, 00:13:02.444 "num_base_bdevs_discovered": 3, 00:13:02.444 "num_base_bdevs_operational": 3, 00:13:02.444 "process": { 00:13:02.444 "type": "rebuild", 00:13:02.444 "target": "spare", 00:13:02.444 "progress": { 00:13:02.444 "blocks": 26624, 00:13:02.444 "percent": 41 00:13:02.444 } 00:13:02.444 }, 00:13:02.444 "base_bdevs_list": [ 00:13:02.444 { 00:13:02.444 "name": "spare", 00:13:02.444 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:02.444 "is_configured": true, 00:13:02.444 "data_offset": 2048, 00:13:02.444 "data_size": 63488 00:13:02.444 }, 00:13:02.444 { 00:13:02.444 "name": null, 00:13:02.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.444 "is_configured": false, 00:13:02.444 "data_offset": 0, 00:13:02.444 "data_size": 63488 00:13:02.444 }, 00:13:02.444 { 00:13:02.444 "name": "BaseBdev3", 00:13:02.444 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:02.444 "is_configured": true, 00:13:02.444 "data_offset": 2048, 00:13:02.444 "data_size": 63488 00:13:02.444 }, 00:13:02.444 { 00:13:02.444 "name": "BaseBdev4", 00:13:02.444 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:02.444 "is_configured": true, 00:13:02.444 "data_offset": 2048, 00:13:02.444 "data_size": 63488 00:13:02.444 } 00:13:02.444 ] 00:13:02.444 }' 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.444 02:46:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.444 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.719 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.719 02:46:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.656 "name": "raid_bdev1", 00:13:03.656 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:03.656 "strip_size_kb": 0, 00:13:03.656 "state": "online", 00:13:03.656 "raid_level": "raid1", 00:13:03.656 "superblock": true, 00:13:03.656 "num_base_bdevs": 4, 
00:13:03.656 "num_base_bdevs_discovered": 3, 00:13:03.656 "num_base_bdevs_operational": 3, 00:13:03.656 "process": { 00:13:03.656 "type": "rebuild", 00:13:03.656 "target": "spare", 00:13:03.656 "progress": { 00:13:03.656 "blocks": 51200, 00:13:03.656 "percent": 80 00:13:03.656 } 00:13:03.656 }, 00:13:03.656 "base_bdevs_list": [ 00:13:03.656 { 00:13:03.656 "name": "spare", 00:13:03.656 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:03.656 "is_configured": true, 00:13:03.656 "data_offset": 2048, 00:13:03.656 "data_size": 63488 00:13:03.656 }, 00:13:03.656 { 00:13:03.656 "name": null, 00:13:03.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.656 "is_configured": false, 00:13:03.656 "data_offset": 0, 00:13:03.656 "data_size": 63488 00:13:03.656 }, 00:13:03.656 { 00:13:03.656 "name": "BaseBdev3", 00:13:03.656 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:03.656 "is_configured": true, 00:13:03.656 "data_offset": 2048, 00:13:03.656 "data_size": 63488 00:13:03.656 }, 00:13:03.656 { 00:13:03.656 "name": "BaseBdev4", 00:13:03.656 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:03.656 "is_configured": true, 00:13:03.656 "data_offset": 2048, 00:13:03.656 "data_size": 63488 00:13:03.656 } 00:13:03.656 ] 00:13:03.656 }' 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.656 02:46:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.224 [2024-12-07 02:46:15.138959] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.224 [2024-12-07 02:46:15.139095] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.224 [2024-12-07 02:46:15.139256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.790 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.790 "name": "raid_bdev1", 00:13:04.790 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:04.790 "strip_size_kb": 0, 00:13:04.790 "state": "online", 00:13:04.790 "raid_level": "raid1", 00:13:04.790 "superblock": true, 00:13:04.790 "num_base_bdevs": 4, 00:13:04.790 "num_base_bdevs_discovered": 3, 00:13:04.790 "num_base_bdevs_operational": 3, 00:13:04.790 "base_bdevs_list": [ 00:13:04.790 { 00:13:04.790 "name": "spare", 00:13:04.790 "uuid": 
"2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:04.790 "is_configured": true, 00:13:04.790 "data_offset": 2048, 00:13:04.790 "data_size": 63488 00:13:04.790 }, 00:13:04.790 { 00:13:04.790 "name": null, 00:13:04.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.791 "is_configured": false, 00:13:04.791 "data_offset": 0, 00:13:04.791 "data_size": 63488 00:13:04.791 }, 00:13:04.791 { 00:13:04.791 "name": "BaseBdev3", 00:13:04.791 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:04.791 "is_configured": true, 00:13:04.791 "data_offset": 2048, 00:13:04.791 "data_size": 63488 00:13:04.791 }, 00:13:04.791 { 00:13:04.791 "name": "BaseBdev4", 00:13:04.791 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:04.791 "is_configured": true, 00:13:04.791 "data_offset": 2048, 00:13:04.791 "data_size": 63488 00:13:04.791 } 00:13:04.791 ] 00:13:04.791 }' 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.791 02:46:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.791 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.050 "name": "raid_bdev1", 00:13:05.050 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:05.050 "strip_size_kb": 0, 00:13:05.050 "state": "online", 00:13:05.050 "raid_level": "raid1", 00:13:05.050 "superblock": true, 00:13:05.050 "num_base_bdevs": 4, 00:13:05.050 "num_base_bdevs_discovered": 3, 00:13:05.050 "num_base_bdevs_operational": 3, 00:13:05.050 "base_bdevs_list": [ 00:13:05.050 { 00:13:05.050 "name": "spare", 00:13:05.050 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:05.050 "is_configured": true, 00:13:05.050 "data_offset": 2048, 00:13:05.050 "data_size": 63488 00:13:05.050 }, 00:13:05.050 { 00:13:05.050 "name": null, 00:13:05.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.050 "is_configured": false, 00:13:05.050 "data_offset": 0, 00:13:05.050 "data_size": 63488 00:13:05.050 }, 00:13:05.050 { 00:13:05.050 "name": "BaseBdev3", 00:13:05.050 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:05.050 "is_configured": true, 00:13:05.050 "data_offset": 2048, 00:13:05.050 "data_size": 63488 00:13:05.050 }, 00:13:05.050 { 00:13:05.050 "name": "BaseBdev4", 00:13:05.050 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:05.050 "is_configured": true, 00:13:05.050 "data_offset": 2048, 00:13:05.050 "data_size": 63488 00:13:05.050 } 00:13:05.050 ] 00:13:05.050 }' 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.050 02:46:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.050 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.050 "name": "raid_bdev1", 00:13:05.050 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:05.050 "strip_size_kb": 0, 00:13:05.050 "state": "online", 00:13:05.050 "raid_level": "raid1", 00:13:05.050 "superblock": true, 00:13:05.050 "num_base_bdevs": 4, 00:13:05.050 "num_base_bdevs_discovered": 3, 00:13:05.050 "num_base_bdevs_operational": 3, 00:13:05.050 "base_bdevs_list": [ 00:13:05.050 { 00:13:05.050 "name": "spare", 00:13:05.050 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:05.050 "is_configured": true, 00:13:05.050 "data_offset": 2048, 00:13:05.050 "data_size": 63488 00:13:05.050 }, 00:13:05.050 { 00:13:05.050 "name": null, 00:13:05.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.050 "is_configured": false, 00:13:05.050 "data_offset": 0, 00:13:05.050 "data_size": 63488 00:13:05.050 }, 00:13:05.050 { 00:13:05.050 "name": "BaseBdev3", 00:13:05.050 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:05.050 "is_configured": true, 00:13:05.050 "data_offset": 2048, 00:13:05.050 "data_size": 63488 00:13:05.050 }, 00:13:05.050 { 00:13:05.050 "name": "BaseBdev4", 00:13:05.050 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:05.050 "is_configured": true, 00:13:05.050 "data_offset": 2048, 00:13:05.050 "data_size": 63488 00:13:05.050 } 00:13:05.050 ] 00:13:05.050 }' 00:13:05.050 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.050 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.309 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.309 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.309 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.309 
[2024-12-07 02:46:16.371420] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.309 [2024-12-07 02:46:16.371497] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.309 [2024-12-07 02:46:16.371648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.309 [2024-12-07 02:46:16.371769] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.309 [2024-12-07 02:46:16.371854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:05.309 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.310 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.310 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.310 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.310 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:05.569 02:46:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:05.569 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.570 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:05.570 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.570 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.570 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:05.570 /dev/nbd0 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:13:05.830 1+0 records in 00:13:05.830 1+0 records out 00:13:05.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568041 s, 7.2 MB/s 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:05.830 /dev/nbd1 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.830 1+0 records in 00:13:05.830 1+0 records out 00:13:05.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342401 s, 12.0 MB/s 00:13:05.830 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local 
i 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.090 02:46:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.090 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.348 
02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.348 [2024-12-07 02:46:17.415674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.348 [2024-12-07 02:46:17.415749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.348 [2024-12-07 02:46:17.415775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:06.348 [2024-12-07 02:46:17.415790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.348 [2024-12-07 02:46:17.418325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.348 [2024-12-07 02:46:17.418363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.348 [2024-12-07 02:46:17.418432] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:06.348 [2024-12-07 02:46:17.418483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:13:06.348 [2024-12-07 02:46:17.418650] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.348 [2024-12-07 02:46:17.418748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:06.348 spare 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.348 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.606 [2024-12-07 02:46:17.518634] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:06.606 [2024-12-07 02:46:17.518662] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.606 [2024-12-07 02:46:17.518977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:06.606 [2024-12-07 02:46:17.519128] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:06.606 [2024-12-07 02:46:17.519138] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:06.606 [2024-12-07 02:46:17.519266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.606 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.607 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.607 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.607 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.607 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.607 "name": "raid_bdev1", 00:13:06.607 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:06.607 "strip_size_kb": 0, 00:13:06.607 "state": "online", 00:13:06.607 "raid_level": "raid1", 00:13:06.607 "superblock": true, 00:13:06.607 "num_base_bdevs": 4, 00:13:06.607 "num_base_bdevs_discovered": 3, 00:13:06.607 "num_base_bdevs_operational": 3, 00:13:06.607 "base_bdevs_list": [ 00:13:06.607 { 00:13:06.607 "name": "spare", 00:13:06.607 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:06.607 "is_configured": true, 00:13:06.607 "data_offset": 2048, 00:13:06.607 "data_size": 63488 00:13:06.607 }, 00:13:06.607 { 00:13:06.607 "name": null, 
00:13:06.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.607 "is_configured": false, 00:13:06.607 "data_offset": 2048, 00:13:06.607 "data_size": 63488 00:13:06.607 }, 00:13:06.607 { 00:13:06.607 "name": "BaseBdev3", 00:13:06.607 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:06.607 "is_configured": true, 00:13:06.607 "data_offset": 2048, 00:13:06.607 "data_size": 63488 00:13:06.607 }, 00:13:06.607 { 00:13:06.607 "name": "BaseBdev4", 00:13:06.607 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:06.607 "is_configured": true, 00:13:06.607 "data_offset": 2048, 00:13:06.607 "data_size": 63488 00:13:06.607 } 00:13:06.607 ] 00:13:06.607 }' 00:13:06.607 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.607 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.865 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.865 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.865 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.865 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.865 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.124 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.124 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.124 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.125 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.125 02:46:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.125 02:46:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.125 "name": "raid_bdev1", 00:13:07.125 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:07.125 "strip_size_kb": 0, 00:13:07.125 "state": "online", 00:13:07.125 "raid_level": "raid1", 00:13:07.125 "superblock": true, 00:13:07.125 "num_base_bdevs": 4, 00:13:07.125 "num_base_bdevs_discovered": 3, 00:13:07.125 "num_base_bdevs_operational": 3, 00:13:07.125 "base_bdevs_list": [ 00:13:07.125 { 00:13:07.125 "name": "spare", 00:13:07.125 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:07.125 "is_configured": true, 00:13:07.125 "data_offset": 2048, 00:13:07.125 "data_size": 63488 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": null, 00:13:07.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.125 "is_configured": false, 00:13:07.125 "data_offset": 2048, 00:13:07.125 "data_size": 63488 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": "BaseBdev3", 00:13:07.125 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:07.125 "is_configured": true, 00:13:07.125 "data_offset": 2048, 00:13:07.125 "data_size": 63488 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": "BaseBdev4", 00:13:07.125 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:07.125 "is_configured": true, 00:13:07.125 "data_offset": 2048, 00:13:07.125 "data_size": 63488 00:13:07.125 } 00:13:07.125 ] 00:13:07.125 }' 00:13:07.125 02:46:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.125 02:46:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.125 [2024-12-07 02:46:18.122454] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.125 "name": "raid_bdev1", 00:13:07.125 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:07.125 "strip_size_kb": 0, 00:13:07.125 "state": "online", 00:13:07.125 "raid_level": "raid1", 00:13:07.125 "superblock": true, 00:13:07.125 "num_base_bdevs": 4, 00:13:07.125 "num_base_bdevs_discovered": 2, 00:13:07.125 "num_base_bdevs_operational": 2, 00:13:07.125 "base_bdevs_list": [ 00:13:07.125 { 00:13:07.125 "name": null, 00:13:07.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.125 "is_configured": false, 00:13:07.125 "data_offset": 0, 00:13:07.125 "data_size": 63488 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": null, 00:13:07.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.125 "is_configured": false, 00:13:07.125 "data_offset": 2048, 00:13:07.125 "data_size": 63488 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": "BaseBdev3", 00:13:07.125 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:07.125 "is_configured": true, 00:13:07.125 "data_offset": 2048, 00:13:07.125 "data_size": 63488 00:13:07.125 }, 00:13:07.125 { 00:13:07.125 "name": "BaseBdev4", 00:13:07.125 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:07.125 "is_configured": 
true, 00:13:07.125 "data_offset": 2048, 00:13:07.125 "data_size": 63488 00:13:07.125 } 00:13:07.125 ] 00:13:07.125 }' 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.125 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.694 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.694 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.694 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.694 [2024-12-07 02:46:18.569705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.694 [2024-12-07 02:46:18.569932] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:07.694 [2024-12-07 02:46:18.569999] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:07.694 [2024-12-07 02:46:18.570057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.694 [2024-12-07 02:46:18.575781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:07.694 02:46:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.694 02:46:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:07.694 [2024-12-07 02:46:18.577995] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.633 "name": "raid_bdev1", 00:13:08.633 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:08.633 "strip_size_kb": 0, 00:13:08.633 "state": "online", 00:13:08.633 "raid_level": "raid1", 
00:13:08.633 "superblock": true, 00:13:08.633 "num_base_bdevs": 4, 00:13:08.633 "num_base_bdevs_discovered": 3, 00:13:08.633 "num_base_bdevs_operational": 3, 00:13:08.633 "process": { 00:13:08.633 "type": "rebuild", 00:13:08.633 "target": "spare", 00:13:08.633 "progress": { 00:13:08.633 "blocks": 20480, 00:13:08.633 "percent": 32 00:13:08.633 } 00:13:08.633 }, 00:13:08.633 "base_bdevs_list": [ 00:13:08.633 { 00:13:08.633 "name": "spare", 00:13:08.633 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:08.633 "is_configured": true, 00:13:08.633 "data_offset": 2048, 00:13:08.633 "data_size": 63488 00:13:08.633 }, 00:13:08.633 { 00:13:08.633 "name": null, 00:13:08.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.633 "is_configured": false, 00:13:08.633 "data_offset": 2048, 00:13:08.633 "data_size": 63488 00:13:08.633 }, 00:13:08.633 { 00:13:08.633 "name": "BaseBdev3", 00:13:08.633 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:08.633 "is_configured": true, 00:13:08.633 "data_offset": 2048, 00:13:08.633 "data_size": 63488 00:13:08.633 }, 00:13:08.633 { 00:13:08.633 "name": "BaseBdev4", 00:13:08.633 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:08.633 "is_configured": true, 00:13:08.633 "data_offset": 2048, 00:13:08.633 "data_size": 63488 00:13:08.633 } 00:13:08.633 ] 00:13:08.633 }' 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.633 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.893 [2024-12-07 02:46:19.730751] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.893 [2024-12-07 02:46:19.785671] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.893 [2024-12-07 02:46:19.785774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.893 [2024-12-07 02:46:19.785793] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.893 [2024-12-07 02:46:19.785803] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.893 "name": "raid_bdev1", 00:13:08.893 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:08.893 "strip_size_kb": 0, 00:13:08.893 "state": "online", 00:13:08.893 "raid_level": "raid1", 00:13:08.893 "superblock": true, 00:13:08.893 "num_base_bdevs": 4, 00:13:08.893 "num_base_bdevs_discovered": 2, 00:13:08.893 "num_base_bdevs_operational": 2, 00:13:08.893 "base_bdevs_list": [ 00:13:08.893 { 00:13:08.893 "name": null, 00:13:08.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.893 "is_configured": false, 00:13:08.893 "data_offset": 0, 00:13:08.893 "data_size": 63488 00:13:08.893 }, 00:13:08.893 { 00:13:08.893 "name": null, 00:13:08.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.893 "is_configured": false, 00:13:08.893 "data_offset": 2048, 00:13:08.893 "data_size": 63488 00:13:08.893 }, 00:13:08.893 { 00:13:08.893 "name": "BaseBdev3", 00:13:08.893 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:08.893 "is_configured": true, 00:13:08.893 "data_offset": 2048, 00:13:08.893 "data_size": 63488 00:13:08.893 }, 00:13:08.893 { 00:13:08.893 "name": "BaseBdev4", 00:13:08.893 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:08.893 "is_configured": true, 00:13:08.893 "data_offset": 2048, 00:13:08.893 "data_size": 63488 00:13:08.893 } 00:13:08.893 ] 00:13:08.893 }' 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:08.893 02:46:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.153 02:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.153 02:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.153 02:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.413 [2024-12-07 02:46:20.231222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.413 [2024-12-07 02:46:20.231331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.413 [2024-12-07 02:46:20.231377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:09.413 [2024-12-07 02:46:20.231430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.413 [2024-12-07 02:46:20.232007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.413 [2024-12-07 02:46:20.232072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.413 [2024-12-07 02:46:20.232189] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:09.413 [2024-12-07 02:46:20.232238] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:09.413 [2024-12-07 02:46:20.232285] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:09.413 [2024-12-07 02:46:20.232333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.413 [2024-12-07 02:46:20.237468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:09.413 spare 00:13:09.414 02:46:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.414 02:46:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:09.414 [2024-12-07 02:46:20.239513] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:10.352 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.352 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.352 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.352 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.352 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.353 "name": "raid_bdev1", 00:13:10.353 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:10.353 "strip_size_kb": 0, 00:13:10.353 "state": "online", 00:13:10.353 
"raid_level": "raid1", 00:13:10.353 "superblock": true, 00:13:10.353 "num_base_bdevs": 4, 00:13:10.353 "num_base_bdevs_discovered": 3, 00:13:10.353 "num_base_bdevs_operational": 3, 00:13:10.353 "process": { 00:13:10.353 "type": "rebuild", 00:13:10.353 "target": "spare", 00:13:10.353 "progress": { 00:13:10.353 "blocks": 20480, 00:13:10.353 "percent": 32 00:13:10.353 } 00:13:10.353 }, 00:13:10.353 "base_bdevs_list": [ 00:13:10.353 { 00:13:10.353 "name": "spare", 00:13:10.353 "uuid": "2c21acf9-5a2c-567d-bc6f-276acfeb642e", 00:13:10.353 "is_configured": true, 00:13:10.353 "data_offset": 2048, 00:13:10.353 "data_size": 63488 00:13:10.353 }, 00:13:10.353 { 00:13:10.353 "name": null, 00:13:10.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.353 "is_configured": false, 00:13:10.353 "data_offset": 2048, 00:13:10.353 "data_size": 63488 00:13:10.353 }, 00:13:10.353 { 00:13:10.353 "name": "BaseBdev3", 00:13:10.353 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:10.353 "is_configured": true, 00:13:10.353 "data_offset": 2048, 00:13:10.353 "data_size": 63488 00:13:10.353 }, 00:13:10.353 { 00:13:10.353 "name": "BaseBdev4", 00:13:10.353 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:10.353 "is_configured": true, 00:13:10.353 "data_offset": 2048, 00:13:10.353 "data_size": 63488 00:13:10.353 } 00:13:10.353 ] 00:13:10.353 }' 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.353 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.353 [2024-12-07 02:46:21.407456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.613 [2024-12-07 02:46:21.447027] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.613 [2024-12-07 02:46:21.447130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.613 [2024-12-07 02:46:21.447170] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.613 [2024-12-07 02:46:21.447191] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.613 
02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.613 "name": "raid_bdev1", 00:13:10.613 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:10.613 "strip_size_kb": 0, 00:13:10.613 "state": "online", 00:13:10.613 "raid_level": "raid1", 00:13:10.613 "superblock": true, 00:13:10.613 "num_base_bdevs": 4, 00:13:10.613 "num_base_bdevs_discovered": 2, 00:13:10.613 "num_base_bdevs_operational": 2, 00:13:10.613 "base_bdevs_list": [ 00:13:10.613 { 00:13:10.613 "name": null, 00:13:10.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.613 "is_configured": false, 00:13:10.613 "data_offset": 0, 00:13:10.613 "data_size": 63488 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": null, 00:13:10.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.613 "is_configured": false, 00:13:10.613 "data_offset": 2048, 00:13:10.613 "data_size": 63488 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "BaseBdev3", 00:13:10.613 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:10.613 "is_configured": true, 00:13:10.613 "data_offset": 2048, 00:13:10.613 "data_size": 63488 00:13:10.613 }, 00:13:10.613 { 00:13:10.613 "name": "BaseBdev4", 00:13:10.613 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:10.613 "is_configured": true, 00:13:10.613 "data_offset": 2048, 00:13:10.613 "data_size": 63488 00:13:10.613 } 00:13:10.613 ] 00:13:10.613 }' 00:13:10.613 02:46:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.613 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.881 "name": "raid_bdev1", 00:13:10.881 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:10.881 "strip_size_kb": 0, 00:13:10.881 "state": "online", 00:13:10.881 "raid_level": "raid1", 00:13:10.881 "superblock": true, 00:13:10.881 "num_base_bdevs": 4, 00:13:10.881 "num_base_bdevs_discovered": 2, 00:13:10.881 "num_base_bdevs_operational": 2, 00:13:10.881 "base_bdevs_list": [ 00:13:10.881 { 00:13:10.881 "name": null, 00:13:10.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.881 "is_configured": false, 00:13:10.881 "data_offset": 0, 00:13:10.881 "data_size": 63488 00:13:10.881 }, 00:13:10.881 
{ 00:13:10.881 "name": null, 00:13:10.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.881 "is_configured": false, 00:13:10.881 "data_offset": 2048, 00:13:10.881 "data_size": 63488 00:13:10.881 }, 00:13:10.881 { 00:13:10.881 "name": "BaseBdev3", 00:13:10.881 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:10.881 "is_configured": true, 00:13:10.881 "data_offset": 2048, 00:13:10.881 "data_size": 63488 00:13:10.881 }, 00:13:10.881 { 00:13:10.881 "name": "BaseBdev4", 00:13:10.881 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:10.881 "is_configured": true, 00:13:10.881 "data_offset": 2048, 00:13:10.881 "data_size": 63488 00:13:10.881 } 00:13:10.881 ] 00:13:10.881 }' 00:13:10.881 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.140 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.140 02:46:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.140 [2024-12-07 02:46:22.044722] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:11.140 [2024-12-07 02:46:22.044777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.140 [2024-12-07 02:46:22.044800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:11.140 [2024-12-07 02:46:22.044810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.140 [2024-12-07 02:46:22.045291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.140 [2024-12-07 02:46:22.045319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:11.140 [2024-12-07 02:46:22.045412] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:11.140 [2024-12-07 02:46:22.045426] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:11.140 [2024-12-07 02:46:22.045436] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:11.140 [2024-12-07 02:46:22.045449] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:11.140 BaseBdev1 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.140 02:46:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.079 02:46:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.079 "name": "raid_bdev1", 00:13:12.079 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:12.079 "strip_size_kb": 0, 00:13:12.079 "state": "online", 00:13:12.079 "raid_level": "raid1", 00:13:12.079 "superblock": true, 00:13:12.079 "num_base_bdevs": 4, 00:13:12.079 "num_base_bdevs_discovered": 2, 00:13:12.079 "num_base_bdevs_operational": 2, 00:13:12.079 "base_bdevs_list": [ 00:13:12.079 { 00:13:12.079 "name": null, 00:13:12.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.079 "is_configured": false, 00:13:12.079 "data_offset": 0, 00:13:12.079 "data_size": 63488 00:13:12.079 }, 00:13:12.079 { 00:13:12.079 "name": null, 00:13:12.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.079 
"is_configured": false, 00:13:12.079 "data_offset": 2048, 00:13:12.079 "data_size": 63488 00:13:12.079 }, 00:13:12.079 { 00:13:12.079 "name": "BaseBdev3", 00:13:12.079 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:12.079 "is_configured": true, 00:13:12.079 "data_offset": 2048, 00:13:12.079 "data_size": 63488 00:13:12.079 }, 00:13:12.079 { 00:13:12.079 "name": "BaseBdev4", 00:13:12.079 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:12.079 "is_configured": true, 00:13:12.079 "data_offset": 2048, 00:13:12.079 "data_size": 63488 00:13:12.079 } 00:13:12.079 ] 00:13:12.079 }' 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.079 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:12.649 "name": "raid_bdev1", 00:13:12.649 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:12.649 "strip_size_kb": 0, 00:13:12.649 "state": "online", 00:13:12.649 "raid_level": "raid1", 00:13:12.649 "superblock": true, 00:13:12.649 "num_base_bdevs": 4, 00:13:12.649 "num_base_bdevs_discovered": 2, 00:13:12.649 "num_base_bdevs_operational": 2, 00:13:12.649 "base_bdevs_list": [ 00:13:12.649 { 00:13:12.649 "name": null, 00:13:12.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.649 "is_configured": false, 00:13:12.649 "data_offset": 0, 00:13:12.649 "data_size": 63488 00:13:12.649 }, 00:13:12.649 { 00:13:12.649 "name": null, 00:13:12.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.649 "is_configured": false, 00:13:12.649 "data_offset": 2048, 00:13:12.649 "data_size": 63488 00:13:12.649 }, 00:13:12.649 { 00:13:12.649 "name": "BaseBdev3", 00:13:12.649 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:12.649 "is_configured": true, 00:13:12.649 "data_offset": 2048, 00:13:12.649 "data_size": 63488 00:13:12.649 }, 00:13:12.649 { 00:13:12.649 "name": "BaseBdev4", 00:13:12.649 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:12.649 "is_configured": true, 00:13:12.649 "data_offset": 2048, 00:13:12.649 "data_size": 63488 00:13:12.649 } 00:13:12.649 ] 00:13:12.649 }' 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:12.649 [2024-12-07 02:46:23.654125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:12.649 [2024-12-07 02:46:23.654361] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:12.649 [2024-12-07 02:46:23.654432] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:12.649 request: 00:13:12.649 { 00:13:12.649 "base_bdev": "BaseBdev1", 00:13:12.649 "raid_bdev": "raid_bdev1", 00:13:12.649 "method": "bdev_raid_add_base_bdev", 00:13:12.649 "req_id": 1 00:13:12.649 } 00:13:12.649 Got JSON-RPC error response 00:13:12.649 response: 00:13:12.649 { 00:13:12.649 "code": -22, 00:13:12.649 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:12.649 } 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.649 02:46:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.029 "name": "raid_bdev1", 00:13:14.029 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:14.029 "strip_size_kb": 0, 00:13:14.029 "state": "online", 00:13:14.029 "raid_level": "raid1", 00:13:14.029 "superblock": true, 00:13:14.029 "num_base_bdevs": 4, 00:13:14.029 "num_base_bdevs_discovered": 2, 00:13:14.029 "num_base_bdevs_operational": 2, 00:13:14.029 "base_bdevs_list": [ 00:13:14.029 { 00:13:14.029 "name": null, 00:13:14.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.029 "is_configured": false, 00:13:14.029 "data_offset": 0, 00:13:14.029 "data_size": 63488 00:13:14.029 }, 00:13:14.029 { 00:13:14.029 "name": null, 00:13:14.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.029 "is_configured": false, 00:13:14.029 "data_offset": 2048, 00:13:14.029 "data_size": 63488 00:13:14.029 }, 00:13:14.029 { 00:13:14.029 "name": "BaseBdev3", 00:13:14.029 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:14.029 "is_configured": true, 00:13:14.029 "data_offset": 2048, 00:13:14.029 "data_size": 63488 00:13:14.029 }, 00:13:14.029 { 00:13:14.029 "name": "BaseBdev4", 00:13:14.029 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:14.029 "is_configured": true, 00:13:14.029 "data_offset": 2048, 00:13:14.029 "data_size": 63488 00:13:14.029 } 00:13:14.029 ] 00:13:14.029 }' 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.029 02:46:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.294 02:46:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.294 "name": "raid_bdev1", 00:13:14.294 "uuid": "3ac9f25a-338e-4fbe-bb4b-85e6a0bad517", 00:13:14.294 "strip_size_kb": 0, 00:13:14.294 "state": "online", 00:13:14.294 "raid_level": "raid1", 00:13:14.294 "superblock": true, 00:13:14.294 "num_base_bdevs": 4, 00:13:14.294 "num_base_bdevs_discovered": 2, 00:13:14.294 "num_base_bdevs_operational": 2, 00:13:14.294 "base_bdevs_list": [ 00:13:14.294 { 00:13:14.294 "name": null, 00:13:14.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.294 "is_configured": false, 00:13:14.294 "data_offset": 0, 00:13:14.294 "data_size": 63488 00:13:14.294 }, 00:13:14.294 { 00:13:14.294 "name": null, 00:13:14.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.294 "is_configured": false, 00:13:14.294 "data_offset": 2048, 00:13:14.294 "data_size": 63488 00:13:14.294 }, 00:13:14.294 { 00:13:14.294 "name": "BaseBdev3", 00:13:14.294 "uuid": "63ef24fc-0ef1-541a-ab75-7daa2afadd5b", 00:13:14.294 "is_configured": true, 00:13:14.294 "data_offset": 2048, 00:13:14.294 "data_size": 63488 00:13:14.294 }, 
00:13:14.294 { 00:13:14.294 "name": "BaseBdev4", 00:13:14.294 "uuid": "a152bbdc-5aa6-5d82-ab9c-0dceec1c2065", 00:13:14.294 "is_configured": true, 00:13:14.294 "data_offset": 2048, 00:13:14.294 "data_size": 63488 00:13:14.294 } 00:13:14.294 ] 00:13:14.294 }' 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88874 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88874 ']' 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88874 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88874 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.294 killing process with pid 88874 00:13:14.294 Received shutdown signal, test time was about 60.000000 seconds 00:13:14.294 00:13:14.294 Latency(us) 00:13:14.294 [2024-12-07T02:46:25.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.294 [2024-12-07T02:46:25.372Z] 
=================================================================================================================== 00:13:14.294 [2024-12-07T02:46:25.372Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88874' 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88874 00:13:14.294 [2024-12-07 02:46:25.298544] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:14.294 [2024-12-07 02:46:25.298696] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.294 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88874 00:13:14.294 [2024-12-07 02:46:25.298764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.294 [2024-12-07 02:46:25.298778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:14.553 [2024-12-07 02:46:25.396162] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:14.813 00:13:14.813 real 0m23.130s 00:13:14.813 user 0m28.349s 00:13:14.813 sys 0m3.879s 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.813 ************************************ 00:13:14.813 END TEST raid_rebuild_test_sb 00:13:14.813 ************************************ 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.813 02:46:25 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:14.813 02:46:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:14.813 02:46:25 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.813 02:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.813 ************************************ 00:13:14.813 START TEST raid_rebuild_test_io 00:13:14.813 ************************************ 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89611 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89611 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89611 ']' 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.813 02:46:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.073 [2024-12-07 02:46:25.957972] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:15.073 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:15.073 Zero copy mechanism will not be used. 00:13:15.073 [2024-12-07 02:46:25.958199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89611 ] 00:13:15.073 [2024-12-07 02:46:26.122527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.350 [2024-12-07 02:46:26.194978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.350 [2024-12-07 02:46:26.271712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.350 [2024-12-07 02:46:26.271842] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 
00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 BaseBdev1_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 [2024-12-07 02:46:26.826543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.949 [2024-12-07 02:46:26.826629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.949 [2024-12-07 02:46:26.826663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:15.949 [2024-12-07 02:46:26.826680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.949 [2024-12-07 02:46:26.829107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.949 [2024-12-07 02:46:26.829143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.949 BaseBdev1 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 BaseBdev2_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 [2024-12-07 02:46:26.875806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:15.949 [2024-12-07 02:46:26.876047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.949 [2024-12-07 02:46:26.876115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:15.949 [2024-12-07 02:46:26.876142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.949 [2024-12-07 02:46:26.881488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.949 [2024-12-07 02:46:26.881543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.949 BaseBdev2 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 BaseBdev3_malloc 00:13:15.949 
02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 [2024-12-07 02:46:26.913126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:15.949 [2024-12-07 02:46:26.913171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.949 [2024-12-07 02:46:26.913199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:15.949 [2024-12-07 02:46:26.913208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.949 [2024-12-07 02:46:26.915521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.949 [2024-12-07 02:46:26.915614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:15.949 BaseBdev3 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 BaseBdev4_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 [2024-12-07 02:46:26.947649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:15.949 [2024-12-07 02:46:26.947706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.949 [2024-12-07 02:46:26.947734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:15.949 [2024-12-07 02:46:26.947742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.949 [2024-12-07 02:46:26.950085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.949 [2024-12-07 02:46:26.950116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:15.949 BaseBdev4 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 spare_malloc 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 spare_delay 00:13:15.949 
02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 [2024-12-07 02:46:26.994252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.949 [2024-12-07 02:46:26.994300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.949 [2024-12-07 02:46:26.994321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:15.949 [2024-12-07 02:46:26.994330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.949 [2024-12-07 02:46:26.996843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.949 [2024-12-07 02:46:26.996928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.949 spare 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.949 02:46:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.949 [2024-12-07 02:46:27.006316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.949 [2024-12-07 02:46:27.008462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.949 [2024-12-07 02:46:27.008528] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.949 [2024-12-07 02:46:27.008568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.949 [2024-12-07 02:46:27.008659] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:13:15.949 [2024-12-07 02:46:27.008669] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:15.949 [2024-12-07 02:46:27.008895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:15.950 [2024-12-07 02:46:27.009031] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:15.950 [2024-12-07 02:46:27.009068] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:15.950 [2024-12-07 02:46:27.009211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.950 
02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.950 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.209 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.209 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.209 "name": "raid_bdev1", 00:13:16.209 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:16.209 "strip_size_kb": 0, 00:13:16.209 "state": "online", 00:13:16.209 "raid_level": "raid1", 00:13:16.209 "superblock": false, 00:13:16.209 "num_base_bdevs": 4, 00:13:16.209 "num_base_bdevs_discovered": 4, 00:13:16.209 "num_base_bdevs_operational": 4, 00:13:16.209 "base_bdevs_list": [ 00:13:16.209 { 00:13:16.209 "name": "BaseBdev1", 00:13:16.209 "uuid": "fb5010e7-38cd-5c18-b1e0-1f743b383d51", 00:13:16.209 "is_configured": true, 00:13:16.209 "data_offset": 0, 00:13:16.209 "data_size": 65536 00:13:16.209 }, 00:13:16.209 { 00:13:16.209 "name": "BaseBdev2", 00:13:16.209 "uuid": "b1d78d13-59ee-5218-8cb6-b5e48fe0b35f", 00:13:16.209 "is_configured": true, 00:13:16.209 "data_offset": 0, 00:13:16.209 "data_size": 65536 00:13:16.209 }, 00:13:16.209 { 00:13:16.209 "name": "BaseBdev3", 00:13:16.209 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:16.209 "is_configured": true, 00:13:16.209 "data_offset": 0, 00:13:16.209 "data_size": 65536 00:13:16.209 }, 00:13:16.209 { 00:13:16.209 "name": "BaseBdev4", 00:13:16.209 "uuid": 
"a95a7c50-e623-5ef5-be67-12986050366d", 00:13:16.209 "is_configured": true, 00:13:16.209 "data_offset": 0, 00:13:16.209 "data_size": 65536 00:13:16.209 } 00:13:16.209 ] 00:13:16.209 }' 00:13:16.209 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.209 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.468 [2024-12-07 02:46:27.493738] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.468 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev1 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.727 [2024-12-07 02:46:27.569284] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.727 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.728 02:46:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.728 "name": "raid_bdev1", 00:13:16.728 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:16.728 "strip_size_kb": 0, 00:13:16.728 "state": "online", 00:13:16.728 "raid_level": "raid1", 00:13:16.728 "superblock": false, 00:13:16.728 "num_base_bdevs": 4, 00:13:16.728 "num_base_bdevs_discovered": 3, 00:13:16.728 "num_base_bdevs_operational": 3, 00:13:16.728 "base_bdevs_list": [ 00:13:16.728 { 00:13:16.728 "name": null, 00:13:16.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.728 "is_configured": false, 00:13:16.728 "data_offset": 0, 00:13:16.728 "data_size": 65536 00:13:16.728 }, 00:13:16.728 { 00:13:16.728 "name": "BaseBdev2", 00:13:16.728 "uuid": "b1d78d13-59ee-5218-8cb6-b5e48fe0b35f", 00:13:16.728 "is_configured": true, 00:13:16.728 "data_offset": 0, 00:13:16.728 "data_size": 65536 00:13:16.728 }, 00:13:16.728 { 00:13:16.728 "name": "BaseBdev3", 00:13:16.728 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:16.728 "is_configured": true, 00:13:16.728 "data_offset": 0, 00:13:16.728 "data_size": 65536 00:13:16.728 }, 00:13:16.728 { 00:13:16.728 "name": "BaseBdev4", 00:13:16.728 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:16.728 "is_configured": true, 00:13:16.728 "data_offset": 0, 00:13:16.728 "data_size": 65536 00:13:16.728 } 00:13:16.728 ] 00:13:16.728 }' 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.728 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.728 [2024-12-07 02:46:27.660449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006080 00:13:16.728 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.728 Zero copy mechanism will not be used. 00:13:16.728 Running I/O for 60 seconds... 00:13:16.987 02:46:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.987 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.987 02:46:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.987 [2024-12-07 02:46:27.991288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.987 02:46:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.987 02:46:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:17.246 [2024-12-07 02:46:28.067867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:17.246 [2024-12-07 02:46:28.070275] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.246 [2024-12-07 02:46:28.192371] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.246 [2024-12-07 02:46:28.194351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.506 [2024-12-07 02:46:28.398952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.506 [2024-12-07 02:46:28.399487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.765 140.00 IOPS, 420.00 MiB/s [2024-12-07T02:46:28.843Z] [2024-12-07 02:46:28.754881] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.765 [2024-12-07 
02:46:28.755360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:18.024 [2024-12-07 02:46:28.959519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:18.024 [2024-12-07 02:46:28.960481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.024 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.024 "name": "raid_bdev1", 00:13:18.024 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:18.024 "strip_size_kb": 0, 00:13:18.024 "state": "online", 00:13:18.024 "raid_level": "raid1", 00:13:18.024 "superblock": false, 00:13:18.024 "num_base_bdevs": 4, 00:13:18.024 "num_base_bdevs_discovered": 4, 00:13:18.024 
"num_base_bdevs_operational": 4, 00:13:18.024 "process": { 00:13:18.024 "type": "rebuild", 00:13:18.024 "target": "spare", 00:13:18.024 "progress": { 00:13:18.024 "blocks": 10240, 00:13:18.025 "percent": 15 00:13:18.025 } 00:13:18.025 }, 00:13:18.025 "base_bdevs_list": [ 00:13:18.025 { 00:13:18.025 "name": "spare", 00:13:18.025 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:18.025 "is_configured": true, 00:13:18.025 "data_offset": 0, 00:13:18.025 "data_size": 65536 00:13:18.025 }, 00:13:18.025 { 00:13:18.025 "name": "BaseBdev2", 00:13:18.025 "uuid": "b1d78d13-59ee-5218-8cb6-b5e48fe0b35f", 00:13:18.025 "is_configured": true, 00:13:18.025 "data_offset": 0, 00:13:18.025 "data_size": 65536 00:13:18.025 }, 00:13:18.025 { 00:13:18.025 "name": "BaseBdev3", 00:13:18.025 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:18.025 "is_configured": true, 00:13:18.025 "data_offset": 0, 00:13:18.025 "data_size": 65536 00:13:18.025 }, 00:13:18.025 { 00:13:18.025 "name": "BaseBdev4", 00:13:18.025 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:18.025 "is_configured": true, 00:13:18.025 "data_offset": 0, 00:13:18.025 "data_size": 65536 00:13:18.025 } 00:13:18.025 ] 00:13:18.025 }' 00:13:18.025 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.284 
[2024-12-07 02:46:29.184001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.284 [2024-12-07 02:46:29.283777] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.284 [2024-12-07 02:46:29.293600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.284 [2024-12-07 02:46:29.293692] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.284 [2024-12-07 02:46:29.293720] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.284 [2024-12-07 02:46:29.315016] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.284 02:46:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.284 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.544 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.544 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.544 "name": "raid_bdev1", 00:13:18.544 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:18.544 "strip_size_kb": 0, 00:13:18.544 "state": "online", 00:13:18.544 "raid_level": "raid1", 00:13:18.544 "superblock": false, 00:13:18.544 "num_base_bdevs": 4, 00:13:18.544 "num_base_bdevs_discovered": 3, 00:13:18.544 "num_base_bdevs_operational": 3, 00:13:18.544 "base_bdevs_list": [ 00:13:18.544 { 00:13:18.544 "name": null, 00:13:18.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.544 "is_configured": false, 00:13:18.544 "data_offset": 0, 00:13:18.544 "data_size": 65536 00:13:18.544 }, 00:13:18.544 { 00:13:18.544 "name": "BaseBdev2", 00:13:18.544 "uuid": "b1d78d13-59ee-5218-8cb6-b5e48fe0b35f", 00:13:18.544 "is_configured": true, 00:13:18.544 "data_offset": 0, 00:13:18.544 "data_size": 65536 00:13:18.544 }, 00:13:18.544 { 00:13:18.544 "name": "BaseBdev3", 00:13:18.544 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:18.544 "is_configured": true, 00:13:18.544 "data_offset": 0, 00:13:18.544 "data_size": 65536 00:13:18.544 }, 00:13:18.544 { 00:13:18.544 "name": "BaseBdev4", 00:13:18.544 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:18.544 "is_configured": true, 00:13:18.544 "data_offset": 0, 00:13:18.544 "data_size": 65536 00:13:18.544 } 00:13:18.544 ] 00:13:18.544 }' 00:13:18.544 02:46:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.544 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.803 132.50 IOPS, 397.50 MiB/s [2024-12-07T02:46:29.881Z] 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.803 "name": "raid_bdev1", 00:13:18.803 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:18.803 "strip_size_kb": 0, 00:13:18.803 "state": "online", 00:13:18.803 "raid_level": "raid1", 00:13:18.803 "superblock": false, 00:13:18.803 "num_base_bdevs": 4, 00:13:18.803 "num_base_bdevs_discovered": 3, 00:13:18.803 "num_base_bdevs_operational": 3, 00:13:18.803 "base_bdevs_list": [ 00:13:18.803 { 00:13:18.803 "name": null, 00:13:18.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.803 "is_configured": false, 00:13:18.803 "data_offset": 0, 00:13:18.803 "data_size": 65536 
00:13:18.803 }, 00:13:18.803 { 00:13:18.803 "name": "BaseBdev2", 00:13:18.803 "uuid": "b1d78d13-59ee-5218-8cb6-b5e48fe0b35f", 00:13:18.803 "is_configured": true, 00:13:18.803 "data_offset": 0, 00:13:18.803 "data_size": 65536 00:13:18.803 }, 00:13:18.803 { 00:13:18.803 "name": "BaseBdev3", 00:13:18.803 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:18.803 "is_configured": true, 00:13:18.803 "data_offset": 0, 00:13:18.803 "data_size": 65536 00:13:18.803 }, 00:13:18.803 { 00:13:18.803 "name": "BaseBdev4", 00:13:18.803 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:18.803 "is_configured": true, 00:13:18.803 "data_offset": 0, 00:13:18.803 "data_size": 65536 00:13:18.803 } 00:13:18.803 ] 00:13:18.803 }' 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.803 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.063 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:19.063 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.063 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.063 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.063 [2024-12-07 02:46:29.905795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.063 02:46:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.063 02:46:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:19.063 [2024-12-07 02:46:29.967481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:19.063 [2024-12-07 02:46:29.969861] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.063 [2024-12-07 02:46:30.086670] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:19.063 [2024-12-07 02:46:30.087172] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:19.323 [2024-12-07 02:46:30.220139] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:19.582 [2024-12-07 02:46:30.561950] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.582 [2024-12-07 02:46:30.562470] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:19.842 141.67 IOPS, 425.00 MiB/s [2024-12-07T02:46:30.920Z] [2024-12-07 02:46:30.775817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.842 [2024-12-07 02:46:30.777061] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.101 02:46:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.101 02:46:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.101 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.101 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.101 "name": "raid_bdev1", 00:13:20.101 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:20.101 "strip_size_kb": 0, 00:13:20.101 "state": "online", 00:13:20.101 "raid_level": "raid1", 00:13:20.101 "superblock": false, 00:13:20.101 "num_base_bdevs": 4, 00:13:20.101 "num_base_bdevs_discovered": 4, 00:13:20.101 "num_base_bdevs_operational": 4, 00:13:20.101 "process": { 00:13:20.101 "type": "rebuild", 00:13:20.101 "target": "spare", 00:13:20.101 "progress": { 00:13:20.101 "blocks": 10240, 00:13:20.101 "percent": 15 00:13:20.101 } 00:13:20.101 }, 00:13:20.101 "base_bdevs_list": [ 00:13:20.101 { 00:13:20.101 "name": "spare", 00:13:20.101 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:20.101 "is_configured": true, 00:13:20.101 "data_offset": 0, 00:13:20.101 "data_size": 65536 00:13:20.101 }, 00:13:20.101 { 00:13:20.101 "name": "BaseBdev2", 00:13:20.101 "uuid": "b1d78d13-59ee-5218-8cb6-b5e48fe0b35f", 00:13:20.101 "is_configured": true, 00:13:20.101 "data_offset": 0, 00:13:20.101 "data_size": 65536 00:13:20.101 }, 00:13:20.101 { 00:13:20.101 "name": "BaseBdev3", 00:13:20.101 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:20.101 "is_configured": true, 00:13:20.101 "data_offset": 0, 00:13:20.101 "data_size": 65536 00:13:20.101 }, 00:13:20.101 { 00:13:20.101 "name": "BaseBdev4", 00:13:20.101 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:20.101 "is_configured": true, 00:13:20.101 "data_offset": 0, 00:13:20.102 "data_size": 65536 
00:13:20.102 } 00:13:20.102 ] 00:13:20.102 }' 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.102 [2024-12-07 02:46:31.109983] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:20.102 [2024-12-07 02:46:31.127396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:20.102 [2024-12-07 02:46:31.139279] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:20.102 [2024-12-07 02:46:31.139355] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # 
base_bdevs[1]= 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.102 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.361 "name": "raid_bdev1", 00:13:20.361 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:20.361 "strip_size_kb": 0, 00:13:20.361 "state": "online", 00:13:20.361 "raid_level": "raid1", 00:13:20.361 "superblock": false, 00:13:20.361 "num_base_bdevs": 4, 00:13:20.361 "num_base_bdevs_discovered": 3, 00:13:20.361 "num_base_bdevs_operational": 3, 00:13:20.361 "process": { 00:13:20.361 "type": "rebuild", 00:13:20.361 "target": "spare", 00:13:20.361 "progress": { 00:13:20.361 "blocks": 14336, 00:13:20.361 "percent": 21 00:13:20.361 } 00:13:20.361 }, 00:13:20.361 "base_bdevs_list": [ 00:13:20.361 { 00:13:20.361 "name": "spare", 00:13:20.361 
"uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:20.361 "is_configured": true, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 }, 00:13:20.361 { 00:13:20.361 "name": null, 00:13:20.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.361 "is_configured": false, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 }, 00:13:20.361 { 00:13:20.361 "name": "BaseBdev3", 00:13:20.361 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:20.361 "is_configured": true, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 }, 00:13:20.361 { 00:13:20.361 "name": "BaseBdev4", 00:13:20.361 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:20.361 "is_configured": true, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 } 00:13:20.361 ] 00:13:20.361 }' 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.361 [2024-12-07 02:46:31.262547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=404 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.361 "name": "raid_bdev1", 00:13:20.361 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:20.361 "strip_size_kb": 0, 00:13:20.361 "state": "online", 00:13:20.361 "raid_level": "raid1", 00:13:20.361 "superblock": false, 00:13:20.361 "num_base_bdevs": 4, 00:13:20.361 "num_base_bdevs_discovered": 3, 00:13:20.361 "num_base_bdevs_operational": 3, 00:13:20.361 "process": { 00:13:20.361 "type": "rebuild", 00:13:20.361 "target": "spare", 00:13:20.361 "progress": { 00:13:20.361 "blocks": 16384, 00:13:20.361 "percent": 25 00:13:20.361 } 00:13:20.361 }, 00:13:20.361 "base_bdevs_list": [ 00:13:20.361 { 00:13:20.361 "name": "spare", 00:13:20.361 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:20.361 "is_configured": true, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 }, 00:13:20.361 { 00:13:20.361 "name": null, 00:13:20.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.361 "is_configured": false, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 }, 00:13:20.361 { 00:13:20.361 "name": "BaseBdev3", 
00:13:20.361 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:20.361 "is_configured": true, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 }, 00:13:20.361 { 00:13:20.361 "name": "BaseBdev4", 00:13:20.361 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:20.361 "is_configured": true, 00:13:20.361 "data_offset": 0, 00:13:20.361 "data_size": 65536 00:13:20.361 } 00:13:20.361 ] 00:13:20.361 }' 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.361 02:46:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.621 [2024-12-07 02:46:31.498002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:20.880 129.50 IOPS, 388.50 MiB/s [2024-12-07T02:46:31.958Z] [2024-12-07 02:46:31.727144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:20.880 [2024-12-07 02:46:31.937244] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:21.450 [2024-12-07 02:46:32.376574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.450 "name": "raid_bdev1", 00:13:21.450 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:21.450 "strip_size_kb": 0, 00:13:21.450 "state": "online", 00:13:21.450 "raid_level": "raid1", 00:13:21.450 "superblock": false, 00:13:21.450 "num_base_bdevs": 4, 00:13:21.450 "num_base_bdevs_discovered": 3, 00:13:21.450 "num_base_bdevs_operational": 3, 00:13:21.450 "process": { 00:13:21.450 "type": "rebuild", 00:13:21.450 "target": "spare", 00:13:21.450 "progress": { 00:13:21.450 "blocks": 34816, 00:13:21.450 "percent": 53 00:13:21.450 } 00:13:21.450 }, 00:13:21.450 "base_bdevs_list": [ 00:13:21.450 { 00:13:21.450 "name": "spare", 00:13:21.450 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:21.450 "is_configured": true, 00:13:21.450 "data_offset": 0, 00:13:21.450 "data_size": 65536 00:13:21.450 }, 00:13:21.450 { 00:13:21.450 "name": null, 00:13:21.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.450 "is_configured": false, 00:13:21.450 
"data_offset": 0, 00:13:21.450 "data_size": 65536 00:13:21.450 }, 00:13:21.450 { 00:13:21.450 "name": "BaseBdev3", 00:13:21.450 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:21.450 "is_configured": true, 00:13:21.450 "data_offset": 0, 00:13:21.450 "data_size": 65536 00:13:21.450 }, 00:13:21.450 { 00:13:21.450 "name": "BaseBdev4", 00:13:21.450 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:21.450 "is_configured": true, 00:13:21.450 "data_offset": 0, 00:13:21.450 "data_size": 65536 00:13:21.450 } 00:13:21.450 ] 00:13:21.450 }' 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.450 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.709 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.709 02:46:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.709 114.40 IOPS, 343.20 MiB/s [2024-12-07T02:46:32.787Z] [2024-12-07 02:46:32.710510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:21.969 [2024-12-07 02:46:32.812382] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:21.969 [2024-12-07 02:46:32.812692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:22.538 [2024-12-07 02:46:33.569763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.538 02:46:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.798 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.798 "name": "raid_bdev1", 00:13:22.798 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:22.798 "strip_size_kb": 0, 00:13:22.798 "state": "online", 00:13:22.798 "raid_level": "raid1", 00:13:22.798 "superblock": false, 00:13:22.798 "num_base_bdevs": 4, 00:13:22.798 "num_base_bdevs_discovered": 3, 00:13:22.798 "num_base_bdevs_operational": 3, 00:13:22.798 "process": { 00:13:22.798 "type": "rebuild", 00:13:22.798 "target": "spare", 00:13:22.798 "progress": { 00:13:22.798 "blocks": 53248, 00:13:22.798 "percent": 81 00:13:22.798 } 00:13:22.798 }, 00:13:22.798 "base_bdevs_list": [ 00:13:22.798 { 00:13:22.798 "name": "spare", 00:13:22.798 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:22.798 "is_configured": true, 00:13:22.798 "data_offset": 0, 00:13:22.798 "data_size": 65536 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "name": null, 
00:13:22.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.798 "is_configured": false, 00:13:22.798 "data_offset": 0, 00:13:22.798 "data_size": 65536 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "name": "BaseBdev3", 00:13:22.798 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:22.798 "is_configured": true, 00:13:22.798 "data_offset": 0, 00:13:22.798 "data_size": 65536 00:13:22.798 }, 00:13:22.798 { 00:13:22.798 "name": "BaseBdev4", 00:13:22.798 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:22.798 "is_configured": true, 00:13:22.798 "data_offset": 0, 00:13:22.798 "data_size": 65536 00:13:22.798 } 00:13:22.798 ] 00:13:22.798 }' 00:13:22.798 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.798 101.00 IOPS, 303.00 MiB/s [2024-12-07T02:46:33.876Z] 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.798 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.798 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.798 02:46:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.367 [2024-12-07 02:46:34.347025] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:23.626 [2024-12-07 02:46:34.446875] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:23.626 [2024-12-07 02:46:34.450084] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.886 91.71 IOPS, 275.14 MiB/s [2024-12-07T02:46:34.964Z] 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.886 "name": "raid_bdev1", 00:13:23.886 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:23.886 "strip_size_kb": 0, 00:13:23.886 "state": "online", 00:13:23.886 "raid_level": "raid1", 00:13:23.886 "superblock": false, 00:13:23.886 "num_base_bdevs": 4, 00:13:23.886 "num_base_bdevs_discovered": 3, 00:13:23.886 "num_base_bdevs_operational": 3, 00:13:23.886 "base_bdevs_list": [ 00:13:23.886 { 00:13:23.886 "name": "spare", 00:13:23.886 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:23.886 "is_configured": true, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 }, 00:13:23.886 { 00:13:23.886 "name": null, 00:13:23.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.886 "is_configured": false, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 }, 00:13:23.886 { 00:13:23.886 "name": "BaseBdev3", 00:13:23.886 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:23.886 "is_configured": 
true, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 }, 00:13:23.886 { 00:13:23.886 "name": "BaseBdev4", 00:13:23.886 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:23.886 "is_configured": true, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 } 00:13:23.886 ] 00:13:23.886 }' 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.886 
02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.886 "name": "raid_bdev1", 00:13:23.886 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:23.886 "strip_size_kb": 0, 00:13:23.886 "state": "online", 00:13:23.886 "raid_level": "raid1", 00:13:23.886 "superblock": false, 00:13:23.886 "num_base_bdevs": 4, 00:13:23.886 "num_base_bdevs_discovered": 3, 00:13:23.886 "num_base_bdevs_operational": 3, 00:13:23.886 "base_bdevs_list": [ 00:13:23.886 { 00:13:23.886 "name": "spare", 00:13:23.886 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:23.886 "is_configured": true, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 }, 00:13:23.886 { 00:13:23.886 "name": null, 00:13:23.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.886 "is_configured": false, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 }, 00:13:23.886 { 00:13:23.886 "name": "BaseBdev3", 00:13:23.886 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:23.886 "is_configured": true, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 }, 00:13:23.886 { 00:13:23.886 "name": "BaseBdev4", 00:13:23.886 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:23.886 "is_configured": true, 00:13:23.886 "data_offset": 0, 00:13:23.886 "data_size": 65536 00:13:23.886 } 00:13:23.886 ] 00:13:23.886 }' 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.886 02:46:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:24.146 02:46:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.146 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.146 "name": "raid_bdev1", 00:13:24.146 "uuid": "d6856503-afc5-49d5-a350-b14ecbac1d95", 00:13:24.146 "strip_size_kb": 0, 00:13:24.146 "state": "online", 00:13:24.146 "raid_level": "raid1", 00:13:24.146 "superblock": false, 00:13:24.146 "num_base_bdevs": 4, 00:13:24.146 "num_base_bdevs_discovered": 3, 00:13:24.146 "num_base_bdevs_operational": 3, 00:13:24.146 "base_bdevs_list": [ 00:13:24.146 
{ 00:13:24.146 "name": "spare", 00:13:24.146 "uuid": "552648cf-5ae7-5b3f-b260-aac78a1859d3", 00:13:24.146 "is_configured": true, 00:13:24.146 "data_offset": 0, 00:13:24.146 "data_size": 65536 00:13:24.146 }, 00:13:24.146 { 00:13:24.146 "name": null, 00:13:24.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.146 "is_configured": false, 00:13:24.146 "data_offset": 0, 00:13:24.146 "data_size": 65536 00:13:24.146 }, 00:13:24.146 { 00:13:24.146 "name": "BaseBdev3", 00:13:24.146 "uuid": "649a3932-32d6-5a58-9511-ad3f855d82f1", 00:13:24.146 "is_configured": true, 00:13:24.146 "data_offset": 0, 00:13:24.146 "data_size": 65536 00:13:24.146 }, 00:13:24.146 { 00:13:24.146 "name": "BaseBdev4", 00:13:24.146 "uuid": "a95a7c50-e623-5ef5-be67-12986050366d", 00:13:24.146 "is_configured": true, 00:13:24.146 "data_offset": 0, 00:13:24.146 "data_size": 65536 00:13:24.147 } 00:13:24.147 ] 00:13:24.147 }' 00:13:24.147 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.147 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.405 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.405 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.405 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.405 [2024-12-07 02:46:35.430902] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.405 [2024-12-07 02:46:35.431022] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.664 00:13:24.664 Latency(us) 00:13:24.664 [2024-12-07T02:46:35.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.664 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:24.664 raid_bdev1 : 7.88 85.59 256.76 0.00 0.00 15656.00 
266.51 112641.79 00:13:24.664 [2024-12-07T02:46:35.742Z] =================================================================================================================== 00:13:24.664 [2024-12-07T02:46:35.742Z] Total : 85.59 256.76 0.00 0.00 15656.00 266.51 112641.79 00:13:24.664 [2024-12-07 02:46:35.526141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.664 [2024-12-07 02:46:35.526222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.664 [2024-12-07 02:46:35.526348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.664 [2024-12-07 02:46:35.526398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:24.664 { 00:13:24.664 "results": [ 00:13:24.664 { 00:13:24.664 "job": "raid_bdev1", 00:13:24.664 "core_mask": "0x1", 00:13:24.664 "workload": "randrw", 00:13:24.664 "percentage": 50, 00:13:24.664 "status": "finished", 00:13:24.664 "queue_depth": 2, 00:13:24.664 "io_size": 3145728, 00:13:24.664 "runtime": 7.875002, 00:13:24.664 "iops": 85.58727985084955, 00:13:24.664 "mibps": 256.7618395525487, 00:13:24.664 "io_failed": 0, 00:13:24.664 "io_timeout": 0, 00:13:24.664 "avg_latency_us": 15655.99574980887, 00:13:24.664 "min_latency_us": 266.5082969432314, 00:13:24.664 "max_latency_us": 112641.78864628822 00:13:24.664 } 00:13:24.664 ], 00:13:24.664 "core_count": 1 00:13:24.664 } 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.664 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:24.923 /dev/nbd0 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:24.923 02:46:35 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.923 1+0 records in 00:13:24.923 1+0 records out 00:13:24.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060856 s, 6.7 MB/s 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@728 -- # continue 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.923 02:46:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:25.182 /dev/nbd1 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.182 1+0 records in 00:13:25.182 1+0 records out 00:13:25.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365515 s, 11.2 MB/s 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.182 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.441 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:25.700 /dev/nbd1 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.700 1+0 records in 00:13:25.700 1+0 records out 
00:13:25.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333353 s, 12.3 MB/s 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.700 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.959 02:46:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89611 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89611 ']' 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89611 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89611 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.219 killing process with pid 89611 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89611' 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89611 00:13:26.219 Received shutdown signal, test time was about 9.525978 seconds 00:13:26.219 00:13:26.219 Latency(us) 00:13:26.219 [2024-12-07T02:46:37.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.219 [2024-12-07T02:46:37.297Z] =================================================================================================================== 00:13:26.219 [2024-12-07T02:46:37.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.219 [2024-12-07 
02:46:37.170474] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:26.219 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89611 00:13:26.219 [2024-12-07 02:46:37.256222] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:26.789 00:13:26.789 real 0m11.772s 00:13:26.789 user 0m15.021s 00:13:26.789 sys 0m1.931s 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.789 ************************************ 00:13:26.789 END TEST raid_rebuild_test_io 00:13:26.789 ************************************ 00:13:26.789 02:46:37 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:26.789 02:46:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:26.789 02:46:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.789 02:46:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.789 ************************************ 00:13:26.789 START TEST raid_rebuild_test_sb_io 00:13:26.789 ************************************ 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # local verify=true 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:26.789 02:46:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89998 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89998 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89998 ']' 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.789 02:46:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.789 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:26.789 Zero copy mechanism will not be used. 00:13:26.789 [2024-12-07 02:46:37.813156] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:26.789 [2024-12-07 02:46:37.813289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89998 ] 00:13:27.049 [2024-12-07 02:46:37.978659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.049 [2024-12-07 02:46:38.051476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.309 [2024-12-07 02:46:38.128666] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.309 [2024-12-07 02:46:38.128706] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.568 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.569 BaseBdev1_malloc 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:27.569 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.829 [2024-12-07 02:46:38.651323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:27.829 [2024-12-07 02:46:38.651397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.829 [2024-12-07 02:46:38.651423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:27.829 [2024-12-07 02:46:38.651446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.829 [2024-12-07 02:46:38.653918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.829 [2024-12-07 02:46:38.653951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:27.829 BaseBdev1 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.829 BaseBdev2_malloc 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.829 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.829 [2024-12-07 02:46:38.701088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:27.829 [2024-12-07 02:46:38.701186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.829 [2024-12-07 02:46:38.701231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:27.829 [2024-12-07 02:46:38.701251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.830 [2024-12-07 02:46:38.705880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.830 [2024-12-07 02:46:38.705927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:27.830 BaseBdev2 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 BaseBdev3_malloc 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 [2024-12-07 02:46:38.738175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:27.830 [2024-12-07 02:46:38.738220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.830 [2024-12-07 02:46:38.738245] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:27.830 [2024-12-07 02:46:38.738254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.830 [2024-12-07 02:46:38.740632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.830 [2024-12-07 02:46:38.740662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:27.830 BaseBdev3 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 BaseBdev4_malloc 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 [2024-12-07 02:46:38.772430] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:13:27.830 [2024-12-07 02:46:38.772482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.830 [2024-12-07 02:46:38.772507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:27.830 [2024-12-07 02:46:38.772514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.830 [2024-12-07 02:46:38.774828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.830 [2024-12-07 02:46:38.774858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:27.830 BaseBdev4 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 spare_malloc 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 spare_delay 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 
-- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 [2024-12-07 02:46:38.818698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.830 [2024-12-07 02:46:38.818744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.830 [2024-12-07 02:46:38.818765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:27.830 [2024-12-07 02:46:38.818773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.830 [2024-12-07 02:46:38.821163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.830 [2024-12-07 02:46:38.821194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.830 spare 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 [2024-12-07 02:46:38.830767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.830 [2024-12-07 02:46:38.832833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.830 [2024-12-07 02:46:38.832902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.830 [2024-12-07 02:46:38.832942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:27.830 [2024-12-07 02:46:38.833102] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000006280 00:13:27.830 [2024-12-07 02:46:38.833115] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:27.830 [2024-12-07 02:46:38.833354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:27.830 [2024-12-07 02:46:38.833515] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:13:27.830 [2024-12-07 02:46:38.833534] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:13:27.830 [2024-12-07 02:46:38.833669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.830 "name": "raid_bdev1", 00:13:27.830 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:27.830 "strip_size_kb": 0, 00:13:27.830 "state": "online", 00:13:27.830 "raid_level": "raid1", 00:13:27.830 "superblock": true, 00:13:27.830 "num_base_bdevs": 4, 00:13:27.830 "num_base_bdevs_discovered": 4, 00:13:27.830 "num_base_bdevs_operational": 4, 00:13:27.830 "base_bdevs_list": [ 00:13:27.830 { 00:13:27.830 "name": "BaseBdev1", 00:13:27.830 "uuid": "f662678d-e6f9-5bca-aaa7-7d1b47792d6e", 00:13:27.830 "is_configured": true, 00:13:27.830 "data_offset": 2048, 00:13:27.830 "data_size": 63488 00:13:27.830 }, 00:13:27.830 { 00:13:27.830 "name": "BaseBdev2", 00:13:27.830 "uuid": "01c1b3ed-6429-5429-bf82-4c3bbb5f4dad", 00:13:27.830 "is_configured": true, 00:13:27.830 "data_offset": 2048, 00:13:27.830 "data_size": 63488 00:13:27.830 }, 00:13:27.830 { 00:13:27.830 "name": "BaseBdev3", 00:13:27.830 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:27.830 "is_configured": true, 00:13:27.830 "data_offset": 2048, 00:13:27.830 "data_size": 63488 00:13:27.830 }, 00:13:27.830 { 00:13:27.830 "name": "BaseBdev4", 00:13:27.830 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:27.830 "is_configured": true, 00:13:27.830 "data_offset": 2048, 00:13:27.830 "data_size": 63488 00:13:27.830 } 00:13:27.830 ] 00:13:27.830 }' 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:27.830 02:46:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.399 [2024-12-07 02:46:39.298198] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:28.399 02:46:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.399 [2024-12-07 02:46:39.369788] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.399 "name": "raid_bdev1", 00:13:28.399 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:28.399 "strip_size_kb": 0, 00:13:28.399 "state": "online", 00:13:28.399 "raid_level": "raid1", 00:13:28.399 "superblock": true, 00:13:28.399 "num_base_bdevs": 4, 00:13:28.399 "num_base_bdevs_discovered": 3, 00:13:28.399 "num_base_bdevs_operational": 3, 00:13:28.399 "base_bdevs_list": [ 00:13:28.399 { 00:13:28.399 "name": null, 00:13:28.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.399 "is_configured": false, 00:13:28.399 "data_offset": 0, 00:13:28.399 "data_size": 63488 00:13:28.399 }, 00:13:28.399 { 00:13:28.399 "name": "BaseBdev2", 00:13:28.399 "uuid": "01c1b3ed-6429-5429-bf82-4c3bbb5f4dad", 00:13:28.399 "is_configured": true, 00:13:28.399 "data_offset": 2048, 00:13:28.399 "data_size": 63488 00:13:28.399 }, 00:13:28.399 { 00:13:28.399 "name": "BaseBdev3", 00:13:28.399 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:28.399 "is_configured": true, 00:13:28.399 "data_offset": 2048, 00:13:28.399 "data_size": 63488 00:13:28.399 }, 00:13:28.399 { 00:13:28.399 "name": "BaseBdev4", 00:13:28.399 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:28.399 "is_configured": true, 00:13:28.399 "data_offset": 2048, 00:13:28.399 "data_size": 63488 00:13:28.399 } 00:13:28.399 ] 00:13:28.399 }' 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.399 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.399 [2024-12-07 02:46:39.461017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:28.399 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:28.399 Zero copy mechanism will not be used. 
00:13:28.399 Running I/O for 60 seconds... 00:13:28.969 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.969 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.969 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.969 [2024-12-07 02:46:39.827811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.969 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.969 02:46:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:28.969 [2024-12-07 02:46:39.867100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:28.969 [2024-12-07 02:46:39.869400] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.969 [2024-12-07 02:46:39.996170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:29.228 [2024-12-07 02:46:40.238186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.228 [2024-12-07 02:46:40.239348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:29.748 190.00 IOPS, 570.00 MiB/s [2024-12-07T02:46:40.826Z] [2024-12-07 02:46:40.692677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:29.748 [2024-12-07 02:46:40.693710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:30.007 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.007 02:46:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.007 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.007 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.007 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.007 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.007 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.007 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.008 "name": "raid_bdev1", 00:13:30.008 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:30.008 "strip_size_kb": 0, 00:13:30.008 "state": "online", 00:13:30.008 "raid_level": "raid1", 00:13:30.008 "superblock": true, 00:13:30.008 "num_base_bdevs": 4, 00:13:30.008 "num_base_bdevs_discovered": 4, 00:13:30.008 "num_base_bdevs_operational": 4, 00:13:30.008 "process": { 00:13:30.008 "type": "rebuild", 00:13:30.008 "target": "spare", 00:13:30.008 "progress": { 00:13:30.008 "blocks": 10240, 00:13:30.008 "percent": 16 00:13:30.008 } 00:13:30.008 }, 00:13:30.008 "base_bdevs_list": [ 00:13:30.008 { 00:13:30.008 "name": "spare", 00:13:30.008 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:30.008 "is_configured": true, 00:13:30.008 "data_offset": 2048, 00:13:30.008 "data_size": 63488 00:13:30.008 }, 00:13:30.008 { 00:13:30.008 "name": "BaseBdev2", 00:13:30.008 "uuid": 
"01c1b3ed-6429-5429-bf82-4c3bbb5f4dad", 00:13:30.008 "is_configured": true, 00:13:30.008 "data_offset": 2048, 00:13:30.008 "data_size": 63488 00:13:30.008 }, 00:13:30.008 { 00:13:30.008 "name": "BaseBdev3", 00:13:30.008 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:30.008 "is_configured": true, 00:13:30.008 "data_offset": 2048, 00:13:30.008 "data_size": 63488 00:13:30.008 }, 00:13:30.008 { 00:13:30.008 "name": "BaseBdev4", 00:13:30.008 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:30.008 "is_configured": true, 00:13:30.008 "data_offset": 2048, 00:13:30.008 "data_size": 63488 00:13:30.008 } 00:13:30.008 ] 00:13:30.008 }' 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.008 02:46:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.008 [2024-12-07 02:46:40.979671] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.008 [2024-12-07 02:46:41.035872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:30.268 [2024-12-07 02:46:41.148572] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:30.268 [2024-12-07 02:46:41.170257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.268 
[2024-12-07 02:46:41.170329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.268 [2024-12-07 02:46:41.170358] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:30.268 [2024-12-07 02:46:41.204674] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.268 02:46:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.268 "name": "raid_bdev1", 00:13:30.268 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:30.268 "strip_size_kb": 0, 00:13:30.268 "state": "online", 00:13:30.268 "raid_level": "raid1", 00:13:30.268 "superblock": true, 00:13:30.268 "num_base_bdevs": 4, 00:13:30.268 "num_base_bdevs_discovered": 3, 00:13:30.268 "num_base_bdevs_operational": 3, 00:13:30.268 "base_bdevs_list": [ 00:13:30.268 { 00:13:30.268 "name": null, 00:13:30.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.268 "is_configured": false, 00:13:30.268 "data_offset": 0, 00:13:30.268 "data_size": 63488 00:13:30.268 }, 00:13:30.268 { 00:13:30.268 "name": "BaseBdev2", 00:13:30.268 "uuid": "01c1b3ed-6429-5429-bf82-4c3bbb5f4dad", 00:13:30.268 "is_configured": true, 00:13:30.268 "data_offset": 2048, 00:13:30.268 "data_size": 63488 00:13:30.268 }, 00:13:30.268 { 00:13:30.268 "name": "BaseBdev3", 00:13:30.268 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:30.268 "is_configured": true, 00:13:30.268 "data_offset": 2048, 00:13:30.268 "data_size": 63488 00:13:30.268 }, 00:13:30.268 { 00:13:30.268 "name": "BaseBdev4", 00:13:30.268 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:30.268 "is_configured": true, 00:13:30.268 "data_offset": 2048, 00:13:30.268 "data_size": 63488 00:13:30.268 } 00:13:30.268 ] 00:13:30.268 }' 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.268 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.787 149.00 IOPS, 447.00 MiB/s [2024-12-07T02:46:41.865Z] 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.787 "name": "raid_bdev1", 00:13:30.787 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:30.787 "strip_size_kb": 0, 00:13:30.787 "state": "online", 00:13:30.787 "raid_level": "raid1", 00:13:30.787 "superblock": true, 00:13:30.787 "num_base_bdevs": 4, 00:13:30.787 "num_base_bdevs_discovered": 3, 00:13:30.787 "num_base_bdevs_operational": 3, 00:13:30.787 "base_bdevs_list": [ 00:13:30.787 { 00:13:30.787 "name": null, 00:13:30.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.787 "is_configured": false, 00:13:30.787 "data_offset": 0, 00:13:30.787 "data_size": 63488 00:13:30.787 }, 00:13:30.787 { 00:13:30.787 "name": "BaseBdev2", 00:13:30.787 "uuid": "01c1b3ed-6429-5429-bf82-4c3bbb5f4dad", 00:13:30.787 "is_configured": true, 00:13:30.787 "data_offset": 2048, 00:13:30.787 "data_size": 63488 00:13:30.787 }, 00:13:30.787 { 00:13:30.787 "name": "BaseBdev3", 
00:13:30.787 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:30.787 "is_configured": true, 00:13:30.787 "data_offset": 2048, 00:13:30.787 "data_size": 63488 00:13:30.787 }, 00:13:30.787 { 00:13:30.787 "name": "BaseBdev4", 00:13:30.787 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:30.787 "is_configured": true, 00:13:30.787 "data_offset": 2048, 00:13:30.787 "data_size": 63488 00:13:30.787 } 00:13:30.787 ] 00:13:30.787 }' 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.787 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:30.788 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.788 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.788 [2024-12-07 02:46:41.807373] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.788 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.788 02:46:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:30.788 [2024-12-07 02:46:41.863540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:31.047 [2024-12-07 02:46:41.865844] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.047 [2024-12-07 02:46:41.990552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:31.306 
[2024-12-07 02:46:42.205768] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:31.306 [2024-12-07 02:46:42.206789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:31.566 155.33 IOPS, 466.00 MiB/s [2024-12-07T02:46:42.644Z] [2024-12-07 02:46:42.557730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:31.826 [2024-12-07 02:46:42.803162] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.826 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.085 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.085 "name": "raid_bdev1", 00:13:32.085 "uuid": 
"b3378da9-c085-468f-9e66-46193ff78e41", 00:13:32.086 "strip_size_kb": 0, 00:13:32.086 "state": "online", 00:13:32.086 "raid_level": "raid1", 00:13:32.086 "superblock": true, 00:13:32.086 "num_base_bdevs": 4, 00:13:32.086 "num_base_bdevs_discovered": 4, 00:13:32.086 "num_base_bdevs_operational": 4, 00:13:32.086 "process": { 00:13:32.086 "type": "rebuild", 00:13:32.086 "target": "spare", 00:13:32.086 "progress": { 00:13:32.086 "blocks": 10240, 00:13:32.086 "percent": 16 00:13:32.086 } 00:13:32.086 }, 00:13:32.086 "base_bdevs_list": [ 00:13:32.086 { 00:13:32.086 "name": "spare", 00:13:32.086 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:32.086 "is_configured": true, 00:13:32.086 "data_offset": 2048, 00:13:32.086 "data_size": 63488 00:13:32.086 }, 00:13:32.086 { 00:13:32.086 "name": "BaseBdev2", 00:13:32.086 "uuid": "01c1b3ed-6429-5429-bf82-4c3bbb5f4dad", 00:13:32.086 "is_configured": true, 00:13:32.086 "data_offset": 2048, 00:13:32.086 "data_size": 63488 00:13:32.086 }, 00:13:32.086 { 00:13:32.086 "name": "BaseBdev3", 00:13:32.086 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:32.086 "is_configured": true, 00:13:32.086 "data_offset": 2048, 00:13:32.086 "data_size": 63488 00:13:32.086 }, 00:13:32.086 { 00:13:32.086 "name": "BaseBdev4", 00:13:32.086 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:32.086 "is_configured": true, 00:13:32.086 "data_offset": 2048, 00:13:32.086 "data_size": 63488 00:13:32.086 } 00:13:32.086 ] 00:13:32.086 }' 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:32.086 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.086 02:46:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.086 [2024-12-07 02:46:42.994296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.086 [2024-12-07 02:46:43.071696] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:32.345 [2024-12-07 02:46:43.281798] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:13:32.345 [2024-12-07 02:46:43.281838] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.345 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.345 "name": "raid_bdev1", 00:13:32.345 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:32.345 "strip_size_kb": 0, 00:13:32.346 "state": "online", 00:13:32.346 "raid_level": "raid1", 00:13:32.346 "superblock": true, 00:13:32.346 "num_base_bdevs": 4, 00:13:32.346 "num_base_bdevs_discovered": 3, 00:13:32.346 "num_base_bdevs_operational": 3, 00:13:32.346 "process": { 00:13:32.346 "type": "rebuild", 00:13:32.346 "target": "spare", 00:13:32.346 "progress": { 00:13:32.346 "blocks": 14336, 00:13:32.346 "percent": 22 00:13:32.346 } 00:13:32.346 }, 00:13:32.346 "base_bdevs_list": [ 00:13:32.346 { 00:13:32.346 "name": "spare", 00:13:32.346 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:32.346 "is_configured": true, 00:13:32.346 "data_offset": 2048, 00:13:32.346 "data_size": 63488 00:13:32.346 }, 00:13:32.346 { 00:13:32.346 "name": null, 00:13:32.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.346 
"is_configured": false, 00:13:32.346 "data_offset": 0, 00:13:32.346 "data_size": 63488 00:13:32.346 }, 00:13:32.346 { 00:13:32.346 "name": "BaseBdev3", 00:13:32.346 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:32.346 "is_configured": true, 00:13:32.346 "data_offset": 2048, 00:13:32.346 "data_size": 63488 00:13:32.346 }, 00:13:32.346 { 00:13:32.346 "name": "BaseBdev4", 00:13:32.346 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:32.346 "is_configured": true, 00:13:32.346 "data_offset": 2048, 00:13:32.346 "data_size": 63488 00:13:32.346 } 00:13:32.346 ] 00:13:32.346 }' 00:13:32.346 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.346 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.346 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.346 [2024-12-07 02:46:43.416144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:32.605 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.606 136.25 IOPS, 408.75 MiB/s [2024-12-07T02:46:43.684Z] 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.606 "name": "raid_bdev1", 00:13:32.606 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:32.606 "strip_size_kb": 0, 00:13:32.606 "state": "online", 00:13:32.606 "raid_level": "raid1", 00:13:32.606 "superblock": true, 00:13:32.606 "num_base_bdevs": 4, 00:13:32.606 "num_base_bdevs_discovered": 3, 00:13:32.606 "num_base_bdevs_operational": 3, 00:13:32.606 "process": { 00:13:32.606 "type": "rebuild", 00:13:32.606 "target": "spare", 00:13:32.606 "progress": { 00:13:32.606 "blocks": 16384, 00:13:32.606 "percent": 25 00:13:32.606 } 00:13:32.606 }, 00:13:32.606 "base_bdevs_list": [ 00:13:32.606 { 00:13:32.606 "name": "spare", 00:13:32.606 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:32.606 "is_configured": true, 00:13:32.606 "data_offset": 2048, 00:13:32.606 "data_size": 63488 00:13:32.606 }, 00:13:32.606 { 00:13:32.606 "name": null, 00:13:32.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.606 "is_configured": false, 00:13:32.606 "data_offset": 0, 00:13:32.606 "data_size": 63488 00:13:32.606 }, 00:13:32.606 { 00:13:32.606 "name": "BaseBdev3", 00:13:32.606 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:32.606 "is_configured": true, 00:13:32.606 "data_offset": 2048, 00:13:32.606 "data_size": 63488 00:13:32.606 
}, 00:13:32.606 { 00:13:32.606 "name": "BaseBdev4", 00:13:32.606 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:32.606 "is_configured": true, 00:13:32.606 "data_offset": 2048, 00:13:32.606 "data_size": 63488 00:13:32.606 } 00:13:32.606 ] 00:13:32.606 }' 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.606 02:46:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.865 [2024-12-07 02:46:43.842852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:33.124 [2024-12-07 02:46:44.088776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:33.384 [2024-12-07 02:46:44.430916] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:33.643 123.60 IOPS, 370.80 MiB/s [2024-12-07T02:46:44.721Z] 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:33.643 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.643 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.643 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.643 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.643 02:46:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.643 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.643 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.644 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.644 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.644 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.644 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.644 "name": "raid_bdev1", 00:13:33.644 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:33.644 "strip_size_kb": 0, 00:13:33.644 "state": "online", 00:13:33.644 "raid_level": "raid1", 00:13:33.644 "superblock": true, 00:13:33.644 "num_base_bdevs": 4, 00:13:33.644 "num_base_bdevs_discovered": 3, 00:13:33.644 "num_base_bdevs_operational": 3, 00:13:33.644 "process": { 00:13:33.644 "type": "rebuild", 00:13:33.644 "target": "spare", 00:13:33.644 "progress": { 00:13:33.644 "blocks": 32768, 00:13:33.644 "percent": 51 00:13:33.644 } 00:13:33.644 }, 00:13:33.644 "base_bdevs_list": [ 00:13:33.644 { 00:13:33.644 "name": "spare", 00:13:33.644 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:33.644 "is_configured": true, 00:13:33.644 "data_offset": 2048, 00:13:33.644 "data_size": 63488 00:13:33.644 }, 00:13:33.644 { 00:13:33.644 "name": null, 00:13:33.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.644 "is_configured": false, 00:13:33.644 "data_offset": 0, 00:13:33.644 "data_size": 63488 00:13:33.644 }, 00:13:33.644 { 00:13:33.644 "name": "BaseBdev3", 00:13:33.644 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:33.644 "is_configured": true, 00:13:33.644 "data_offset": 2048, 00:13:33.644 "data_size": 
63488 00:13:33.644 }, 00:13:33.644 { 00:13:33.644 "name": "BaseBdev4", 00:13:33.644 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:33.644 "is_configured": true, 00:13:33.644 "data_offset": 2048, 00:13:33.644 "data_size": 63488 00:13:33.644 } 00:13:33.644 ] 00:13:33.644 }' 00:13:33.644 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.644 [2024-12-07 02:46:44.664024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:33.644 [2024-12-07 02:46:44.664762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:33.644 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.644 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.903 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.903 02:46:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:34.171 [2024-12-07 02:46:44.981706] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:34.171 [2024-12-07 02:46:45.206008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:34.691 109.67 IOPS, 329.00 MiB/s [2024-12-07T02:46:45.769Z] [2024-12-07 02:46:45.532575] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:34.691 [2024-12-07 02:46:45.642306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.691 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.951 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.951 "name": "raid_bdev1", 00:13:34.951 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:34.951 "strip_size_kb": 0, 00:13:34.951 "state": "online", 00:13:34.951 "raid_level": "raid1", 00:13:34.951 "superblock": true, 00:13:34.951 "num_base_bdevs": 4, 00:13:34.951 "num_base_bdevs_discovered": 3, 00:13:34.951 "num_base_bdevs_operational": 3, 00:13:34.951 "process": { 00:13:34.951 "type": "rebuild", 00:13:34.951 "target": "spare", 00:13:34.951 "progress": { 00:13:34.951 "blocks": 47104, 00:13:34.951 "percent": 74 00:13:34.951 } 00:13:34.951 }, 00:13:34.951 "base_bdevs_list": [ 00:13:34.951 { 00:13:34.951 "name": "spare", 00:13:34.951 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:34.951 "is_configured": true, 00:13:34.951 
"data_offset": 2048, 00:13:34.951 "data_size": 63488 00:13:34.951 }, 00:13:34.951 { 00:13:34.951 "name": null, 00:13:34.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.951 "is_configured": false, 00:13:34.951 "data_offset": 0, 00:13:34.951 "data_size": 63488 00:13:34.951 }, 00:13:34.951 { 00:13:34.951 "name": "BaseBdev3", 00:13:34.951 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:34.951 "is_configured": true, 00:13:34.951 "data_offset": 2048, 00:13:34.951 "data_size": 63488 00:13:34.951 }, 00:13:34.951 { 00:13:34.951 "name": "BaseBdev4", 00:13:34.951 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:34.951 "is_configured": true, 00:13:34.951 "data_offset": 2048, 00:13:34.951 "data_size": 63488 00:13:34.951 } 00:13:34.951 ] 00:13:34.951 }' 00:13:34.951 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.951 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.951 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.951 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.951 02:46:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.518 [2024-12-07 02:46:46.308062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:35.777 98.43 IOPS, 295.29 MiB/s [2024-12-07T02:46:46.855Z] [2024-12-07 02:46:46.634701] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:35.777 [2024-12-07 02:46:46.734537] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:35.777 [2024-12-07 02:46:46.736362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.777 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.777 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.777 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.777 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.777 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.777 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.036 "name": "raid_bdev1", 00:13:36.036 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:36.036 "strip_size_kb": 0, 00:13:36.036 "state": "online", 00:13:36.036 "raid_level": "raid1", 00:13:36.036 "superblock": true, 00:13:36.036 "num_base_bdevs": 4, 00:13:36.036 "num_base_bdevs_discovered": 3, 00:13:36.036 "num_base_bdevs_operational": 3, 00:13:36.036 "base_bdevs_list": [ 00:13:36.036 { 00:13:36.036 "name": "spare", 00:13:36.036 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:36.036 "is_configured": true, 00:13:36.036 "data_offset": 2048, 00:13:36.036 "data_size": 63488 00:13:36.036 }, 00:13:36.036 { 00:13:36.036 "name": null, 00:13:36.036 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:36.036 "is_configured": false, 00:13:36.036 "data_offset": 0, 00:13:36.036 "data_size": 63488 00:13:36.036 }, 00:13:36.036 { 00:13:36.036 "name": "BaseBdev3", 00:13:36.036 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:36.036 "is_configured": true, 00:13:36.036 "data_offset": 2048, 00:13:36.036 "data_size": 63488 00:13:36.036 }, 00:13:36.036 { 00:13:36.036 "name": "BaseBdev4", 00:13:36.036 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:36.036 "is_configured": true, 00:13:36.036 "data_offset": 2048, 00:13:36.036 "data_size": 63488 00:13:36.036 } 00:13:36.036 ] 00:13:36.036 }' 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:36.036 02:46:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.036 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.037 "name": "raid_bdev1", 00:13:36.037 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:36.037 "strip_size_kb": 0, 00:13:36.037 "state": "online", 00:13:36.037 "raid_level": "raid1", 00:13:36.037 "superblock": true, 00:13:36.037 "num_base_bdevs": 4, 00:13:36.037 "num_base_bdevs_discovered": 3, 00:13:36.037 "num_base_bdevs_operational": 3, 00:13:36.037 "base_bdevs_list": [ 00:13:36.037 { 00:13:36.037 "name": "spare", 00:13:36.037 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:36.037 "is_configured": true, 00:13:36.037 "data_offset": 2048, 00:13:36.037 "data_size": 63488 00:13:36.037 }, 00:13:36.037 { 00:13:36.037 "name": null, 00:13:36.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.037 "is_configured": false, 00:13:36.037 "data_offset": 0, 00:13:36.037 "data_size": 63488 00:13:36.037 }, 00:13:36.037 { 00:13:36.037 "name": "BaseBdev3", 00:13:36.037 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:36.037 "is_configured": true, 00:13:36.037 "data_offset": 2048, 00:13:36.037 "data_size": 63488 00:13:36.037 }, 00:13:36.037 { 00:13:36.037 "name": "BaseBdev4", 00:13:36.037 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:36.037 "is_configured": true, 00:13:36.037 "data_offset": 2048, 00:13:36.037 "data_size": 63488 00:13:36.037 } 00:13:36.037 ] 00:13:36.037 }' 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.037 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:36.037 
02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:36.296 "name": "raid_bdev1", 00:13:36.296 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:36.296 "strip_size_kb": 0, 00:13:36.296 "state": "online", 00:13:36.296 "raid_level": "raid1", 00:13:36.296 "superblock": true, 00:13:36.296 "num_base_bdevs": 4, 00:13:36.296 "num_base_bdevs_discovered": 3, 00:13:36.296 "num_base_bdevs_operational": 3, 00:13:36.296 "base_bdevs_list": [ 00:13:36.296 { 00:13:36.296 "name": "spare", 00:13:36.296 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:36.296 "is_configured": true, 00:13:36.296 "data_offset": 2048, 00:13:36.296 "data_size": 63488 00:13:36.296 }, 00:13:36.296 { 00:13:36.296 "name": null, 00:13:36.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.296 "is_configured": false, 00:13:36.296 "data_offset": 0, 00:13:36.296 "data_size": 63488 00:13:36.296 }, 00:13:36.296 { 00:13:36.296 "name": "BaseBdev3", 00:13:36.296 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:36.296 "is_configured": true, 00:13:36.296 "data_offset": 2048, 00:13:36.296 "data_size": 63488 00:13:36.296 }, 00:13:36.296 { 00:13:36.296 "name": "BaseBdev4", 00:13:36.296 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:36.296 "is_configured": true, 00:13:36.296 "data_offset": 2048, 00:13:36.296 "data_size": 63488 00:13:36.296 } 00:13:36.296 ] 00:13:36.296 }' 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.296 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.555 90.50 IOPS, 271.50 MiB/s [2024-12-07T02:46:47.633Z] 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.555 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.555 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.555 [2024-12-07 02:46:47.588013] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:13:36.555 [2024-12-07 02:46:47.588052] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.555 00:13:36.555 Latency(us) 00:13:36.555 [2024-12-07T02:46:47.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.555 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:36.555 raid_bdev1 : 8.17 89.82 269.45 0.00 0.00 15124.27 273.66 119052.30 00:13:36.555 [2024-12-07T02:46:47.633Z] =================================================================================================================== 00:13:36.555 [2024-12-07T02:46:47.633Z] Total : 89.82 269.45 0.00 0.00 15124.27 273.66 119052.30 00:13:36.555 [2024-12-07 02:46:47.623340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.555 [2024-12-07 02:46:47.623391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.555 [2024-12-07 02:46:47.623513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.555 [2024-12-07 02:46:47.623528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:13:36.555 { 00:13:36.555 "results": [ 00:13:36.555 { 00:13:36.555 "job": "raid_bdev1", 00:13:36.555 "core_mask": "0x1", 00:13:36.555 "workload": "randrw", 00:13:36.555 "percentage": 50, 00:13:36.555 "status": "finished", 00:13:36.555 "queue_depth": 2, 00:13:36.555 "io_size": 3145728, 00:13:36.555 "runtime": 8.172288, 00:13:36.555 "iops": 89.81572847163487, 00:13:36.555 "mibps": 269.4471854149046, 00:13:36.555 "io_failed": 0, 00:13:36.555 "io_timeout": 0, 00:13:36.555 "avg_latency_us": 15124.266653974752, 00:13:36.555 "min_latency_us": 273.6628820960699, 00:13:36.555 "max_latency_us": 119052.29694323144 00:13:36.555 } 00:13:36.555 ], 00:13:36.555 "core_count": 1 00:13:36.555 } 00:13:36.555 
02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.555 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.555 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:36.555 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.555 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.814 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:36.815 /dev/nbd0 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.075 1+0 records in 00:13:37.075 1+0 records out 00:13:37.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445027 s, 9.2 MB/s 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.075 02:46:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:37.075 /dev/nbd1 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.335 1+0 records in 00:13:37.335 1+0 records out 00:13:37.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483116 s, 8.5 MB/s 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.335 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.594 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:37.852 /dev/nbd1 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.852 1+0 records in 00:13:37.852 1+0 records out 00:13:37.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279312 s, 14.7 MB/s 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:37.852 
02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.852 02:46:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:38.110 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.111 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:38.111 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:38.111 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:38.111 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.111 
02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:38.111 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.111 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:38.369 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.370 
02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.370 [2024-12-07 02:46:49.249605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:38.370 [2024-12-07 02:46:49.249668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.370 [2024-12-07 02:46:49.249691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:38.370 [2024-12-07 02:46:49.249702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.370 [2024-12-07 02:46:49.252199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.370 [2024-12-07 02:46:49.252238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:38.370 [2024-12-07 02:46:49.252329] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:38.370 [2024-12-07 02:46:49.252385] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:38.370 [2024-12-07 02:46:49.252518] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.370 [2024-12-07 02:46:49.252675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.370 spare 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.370 [2024-12-07 02:46:49.352575] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:13:38.370 [2024-12-07 02:46:49.352621] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 
63488, blocklen 512 00:13:38.370 [2024-12-07 02:46:49.352927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000036fc0 00:13:38.370 [2024-12-07 02:46:49.353098] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:13:38.370 [2024-12-07 02:46:49.353115] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:13:38.370 [2024-12-07 02:46:49.353253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.370 "name": "raid_bdev1", 00:13:38.370 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:38.370 "strip_size_kb": 0, 00:13:38.370 "state": "online", 00:13:38.370 "raid_level": "raid1", 00:13:38.370 "superblock": true, 00:13:38.370 "num_base_bdevs": 4, 00:13:38.370 "num_base_bdevs_discovered": 3, 00:13:38.370 "num_base_bdevs_operational": 3, 00:13:38.370 "base_bdevs_list": [ 00:13:38.370 { 00:13:38.370 "name": "spare", 00:13:38.370 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:38.370 "is_configured": true, 00:13:38.370 "data_offset": 2048, 00:13:38.370 "data_size": 63488 00:13:38.370 }, 00:13:38.370 { 00:13:38.370 "name": null, 00:13:38.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.370 "is_configured": false, 00:13:38.370 "data_offset": 2048, 00:13:38.370 "data_size": 63488 00:13:38.370 }, 00:13:38.370 { 00:13:38.370 "name": "BaseBdev3", 00:13:38.370 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:38.370 "is_configured": true, 00:13:38.370 "data_offset": 2048, 00:13:38.370 "data_size": 63488 00:13:38.370 }, 00:13:38.370 { 00:13:38.370 "name": "BaseBdev4", 00:13:38.370 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:38.370 "is_configured": true, 00:13:38.370 "data_offset": 2048, 00:13:38.370 "data_size": 63488 00:13:38.370 } 00:13:38.370 ] 00:13:38.370 }' 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.370 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.938 02:46:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.938 "name": "raid_bdev1", 00:13:38.938 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:38.938 "strip_size_kb": 0, 00:13:38.938 "state": "online", 00:13:38.938 "raid_level": "raid1", 00:13:38.938 "superblock": true, 00:13:38.938 "num_base_bdevs": 4, 00:13:38.938 "num_base_bdevs_discovered": 3, 00:13:38.938 "num_base_bdevs_operational": 3, 00:13:38.938 "base_bdevs_list": [ 00:13:38.938 { 00:13:38.938 "name": "spare", 00:13:38.938 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:38.938 "is_configured": true, 00:13:38.938 "data_offset": 2048, 00:13:38.938 "data_size": 63488 00:13:38.938 }, 00:13:38.938 { 00:13:38.938 "name": null, 00:13:38.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.938 "is_configured": false, 00:13:38.938 "data_offset": 
2048, 00:13:38.938 "data_size": 63488 00:13:38.938 }, 00:13:38.938 { 00:13:38.938 "name": "BaseBdev3", 00:13:38.938 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:38.938 "is_configured": true, 00:13:38.938 "data_offset": 2048, 00:13:38.938 "data_size": 63488 00:13:38.938 }, 00:13:38.938 { 00:13:38.938 "name": "BaseBdev4", 00:13:38.938 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:38.938 "is_configured": true, 00:13:38.938 "data_offset": 2048, 00:13:38.938 "data_size": 63488 00:13:38.938 } 00:13:38.938 ] 00:13:38.938 }' 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:38.938 [2024-12-07 02:46:49.972450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.938 02:46:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.197 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:39.197 "name": "raid_bdev1", 00:13:39.197 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:39.197 "strip_size_kb": 0, 00:13:39.197 "state": "online", 00:13:39.197 "raid_level": "raid1", 00:13:39.197 "superblock": true, 00:13:39.197 "num_base_bdevs": 4, 00:13:39.197 "num_base_bdevs_discovered": 2, 00:13:39.197 "num_base_bdevs_operational": 2, 00:13:39.197 "base_bdevs_list": [ 00:13:39.197 { 00:13:39.197 "name": null, 00:13:39.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.197 "is_configured": false, 00:13:39.197 "data_offset": 0, 00:13:39.197 "data_size": 63488 00:13:39.197 }, 00:13:39.197 { 00:13:39.197 "name": null, 00:13:39.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.197 "is_configured": false, 00:13:39.197 "data_offset": 2048, 00:13:39.197 "data_size": 63488 00:13:39.197 }, 00:13:39.197 { 00:13:39.197 "name": "BaseBdev3", 00:13:39.197 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:39.197 "is_configured": true, 00:13:39.197 "data_offset": 2048, 00:13:39.197 "data_size": 63488 00:13:39.197 }, 00:13:39.197 { 00:13:39.197 "name": "BaseBdev4", 00:13:39.197 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:39.197 "is_configured": true, 00:13:39.197 "data_offset": 2048, 00:13:39.197 "data_size": 63488 00:13:39.197 } 00:13:39.197 ] 00:13:39.197 }' 00:13:39.197 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.197 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.456 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.456 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.456 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.456 [2024-12-07 02:46:50.471768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 
00:13:39.456 [2024-12-07 02:46:50.471903] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:39.456 [2024-12-07 02:46:50.471920] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:39.456 [2024-12-07 02:46:50.471951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.456 [2024-12-07 02:46:50.478254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037090 00:13:39.456 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.456 02:46:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:39.456 [2024-12-07 02:46:50.480331] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.832 "name": "raid_bdev1", 00:13:40.832 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:40.832 "strip_size_kb": 0, 00:13:40.832 "state": "online", 00:13:40.832 "raid_level": "raid1", 00:13:40.832 "superblock": true, 00:13:40.832 "num_base_bdevs": 4, 00:13:40.832 "num_base_bdevs_discovered": 3, 00:13:40.832 "num_base_bdevs_operational": 3, 00:13:40.832 "process": { 00:13:40.832 "type": "rebuild", 00:13:40.832 "target": "spare", 00:13:40.832 "progress": { 00:13:40.832 "blocks": 20480, 00:13:40.832 "percent": 32 00:13:40.832 } 00:13:40.832 }, 00:13:40.832 "base_bdevs_list": [ 00:13:40.832 { 00:13:40.832 "name": "spare", 00:13:40.832 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:40.832 "is_configured": true, 00:13:40.832 "data_offset": 2048, 00:13:40.832 "data_size": 63488 00:13:40.832 }, 00:13:40.832 { 00:13:40.832 "name": null, 00:13:40.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.832 "is_configured": false, 00:13:40.832 "data_offset": 2048, 00:13:40.832 "data_size": 63488 00:13:40.832 }, 00:13:40.832 { 00:13:40.832 "name": "BaseBdev3", 00:13:40.832 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:40.832 "is_configured": true, 00:13:40.832 "data_offset": 2048, 00:13:40.832 "data_size": 63488 00:13:40.832 }, 00:13:40.832 { 00:13:40.832 "name": "BaseBdev4", 00:13:40.832 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:40.832 "is_configured": true, 00:13:40.832 "data_offset": 2048, 00:13:40.832 "data_size": 63488 00:13:40.832 } 00:13:40.832 ] 00:13:40.832 }' 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:40.832 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.833 [2024-12-07 02:46:51.644171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.833 [2024-12-07 02:46:51.687602] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:40.833 [2024-12-07 02:46:51.687663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.833 [2024-12-07 02:46:51.687678] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.833 [2024-12-07 02:46:51.687688] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.833 "name": "raid_bdev1", 00:13:40.833 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:40.833 "strip_size_kb": 0, 00:13:40.833 "state": "online", 00:13:40.833 "raid_level": "raid1", 00:13:40.833 "superblock": true, 00:13:40.833 "num_base_bdevs": 4, 00:13:40.833 "num_base_bdevs_discovered": 2, 00:13:40.833 "num_base_bdevs_operational": 2, 00:13:40.833 "base_bdevs_list": [ 00:13:40.833 { 00:13:40.833 "name": null, 00:13:40.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.833 "is_configured": false, 00:13:40.833 "data_offset": 0, 00:13:40.833 "data_size": 63488 00:13:40.833 }, 00:13:40.833 { 00:13:40.833 "name": null, 00:13:40.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.833 "is_configured": false, 00:13:40.833 "data_offset": 2048, 00:13:40.833 "data_size": 63488 00:13:40.833 }, 00:13:40.833 { 00:13:40.833 "name": "BaseBdev3", 00:13:40.833 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:40.833 
"is_configured": true, 00:13:40.833 "data_offset": 2048, 00:13:40.833 "data_size": 63488 00:13:40.833 }, 00:13:40.833 { 00:13:40.833 "name": "BaseBdev4", 00:13:40.833 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:40.833 "is_configured": true, 00:13:40.833 "data_offset": 2048, 00:13:40.833 "data_size": 63488 00:13:40.833 } 00:13:40.833 ] 00:13:40.833 }' 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.833 02:46:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.092 02:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.092 02:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.092 02:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.092 [2024-12-07 02:46:52.140829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.092 [2024-12-07 02:46:52.140885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.092 [2024-12-07 02:46:52.140909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:41.092 [2024-12-07 02:46:52.140921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.092 [2024-12-07 02:46:52.141395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.092 [2024-12-07 02:46:52.141426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.092 [2024-12-07 02:46:52.141504] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:41.092 [2024-12-07 02:46:52.141521] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:41.092 [2024-12-07 02:46:52.141531] 
bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:41.092 [2024-12-07 02:46:52.141562] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.092 [2024-12-07 02:46:52.146653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:41.092 spare 00:13:41.092 02:46:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.092 02:46:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:41.092 [2024-12-07 02:46:52.148853] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.471 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.471 "name": "raid_bdev1", 00:13:42.471 
"uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:42.471 "strip_size_kb": 0, 00:13:42.471 "state": "online", 00:13:42.471 "raid_level": "raid1", 00:13:42.471 "superblock": true, 00:13:42.471 "num_base_bdevs": 4, 00:13:42.471 "num_base_bdevs_discovered": 3, 00:13:42.471 "num_base_bdevs_operational": 3, 00:13:42.471 "process": { 00:13:42.471 "type": "rebuild", 00:13:42.471 "target": "spare", 00:13:42.471 "progress": { 00:13:42.471 "blocks": 20480, 00:13:42.471 "percent": 32 00:13:42.471 } 00:13:42.471 }, 00:13:42.471 "base_bdevs_list": [ 00:13:42.471 { 00:13:42.471 "name": "spare", 00:13:42.471 "uuid": "401fb972-1b48-5a29-b3c6-effa704d1013", 00:13:42.471 "is_configured": true, 00:13:42.471 "data_offset": 2048, 00:13:42.471 "data_size": 63488 00:13:42.471 }, 00:13:42.471 { 00:13:42.471 "name": null, 00:13:42.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.471 "is_configured": false, 00:13:42.471 "data_offset": 2048, 00:13:42.471 "data_size": 63488 00:13:42.471 }, 00:13:42.471 { 00:13:42.471 "name": "BaseBdev3", 00:13:42.471 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:42.471 "is_configured": true, 00:13:42.471 "data_offset": 2048, 00:13:42.471 "data_size": 63488 00:13:42.471 }, 00:13:42.471 { 00:13:42.472 "name": "BaseBdev4", 00:13:42.472 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:42.472 "is_configured": true, 00:13:42.472 "data_offset": 2048, 00:13:42.472 "data_size": 63488 00:13:42.472 } 00:13:42.472 ] 00:13:42.472 }' 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:42.472 02:46:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.472 [2024-12-07 02:46:53.288787] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.472 [2024-12-07 02:46:53.356340] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:42.472 [2024-12-07 02:46:53.356391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.472 [2024-12-07 02:46:53.356410] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:42.472 [2024-12-07 02:46:53.356418] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.472 02:46:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.472 "name": "raid_bdev1", 00:13:42.472 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:42.472 "strip_size_kb": 0, 00:13:42.472 "state": "online", 00:13:42.472 "raid_level": "raid1", 00:13:42.472 "superblock": true, 00:13:42.472 "num_base_bdevs": 4, 00:13:42.472 "num_base_bdevs_discovered": 2, 00:13:42.472 "num_base_bdevs_operational": 2, 00:13:42.472 "base_bdevs_list": [ 00:13:42.472 { 00:13:42.472 "name": null, 00:13:42.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.472 "is_configured": false, 00:13:42.472 "data_offset": 0, 00:13:42.472 "data_size": 63488 00:13:42.472 }, 00:13:42.472 { 00:13:42.472 "name": null, 00:13:42.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.472 "is_configured": false, 00:13:42.472 "data_offset": 2048, 00:13:42.472 "data_size": 63488 00:13:42.472 }, 00:13:42.472 { 00:13:42.472 "name": "BaseBdev3", 00:13:42.472 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:42.472 "is_configured": true, 00:13:42.472 "data_offset": 2048, 00:13:42.472 "data_size": 63488 00:13:42.472 }, 00:13:42.472 { 00:13:42.472 "name": "BaseBdev4", 00:13:42.472 "uuid": 
"053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:42.472 "is_configured": true, 00:13:42.472 "data_offset": 2048, 00:13:42.472 "data_size": 63488 00:13:42.472 } 00:13:42.472 ] 00:13:42.472 }' 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.472 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.733 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.993 "name": "raid_bdev1", 00:13:42.993 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:42.993 "strip_size_kb": 0, 00:13:42.993 "state": "online", 00:13:42.993 "raid_level": "raid1", 00:13:42.993 "superblock": true, 00:13:42.993 "num_base_bdevs": 4, 00:13:42.993 "num_base_bdevs_discovered": 2, 00:13:42.993 "num_base_bdevs_operational": 2, 00:13:42.993 
"base_bdevs_list": [ 00:13:42.993 { 00:13:42.993 "name": null, 00:13:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.993 "is_configured": false, 00:13:42.993 "data_offset": 0, 00:13:42.993 "data_size": 63488 00:13:42.993 }, 00:13:42.993 { 00:13:42.993 "name": null, 00:13:42.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.993 "is_configured": false, 00:13:42.993 "data_offset": 2048, 00:13:42.993 "data_size": 63488 00:13:42.993 }, 00:13:42.993 { 00:13:42.993 "name": "BaseBdev3", 00:13:42.993 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:42.993 "is_configured": true, 00:13:42.993 "data_offset": 2048, 00:13:42.993 "data_size": 63488 00:13:42.993 }, 00:13:42.993 { 00:13:42.993 "name": "BaseBdev4", 00:13:42.993 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:42.993 "is_configured": true, 00:13:42.993 "data_offset": 2048, 00:13:42.993 "data_size": 63488 00:13:42.993 } 00:13:42.993 ] 00:13:42.993 }' 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.993 [2024-12-07 02:46:53.969441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:42.993 [2024-12-07 02:46:53.969489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.993 [2024-12-07 02:46:53.969511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:13:42.993 [2024-12-07 02:46:53.969520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.993 [2024-12-07 02:46:53.969997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.993 [2024-12-07 02:46:53.970022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:42.993 [2024-12-07 02:46:53.970096] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:42.993 [2024-12-07 02:46:53.970112] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:42.993 [2024-12-07 02:46:53.970134] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:42.993 [2024-12-07 02:46:53.970149] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:42.993 BaseBdev1 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.993 02:46:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.934 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.935 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.935 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.935 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.935 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.935 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.935 02:46:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.935 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.208 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.208 "name": "raid_bdev1", 00:13:44.208 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:44.208 "strip_size_kb": 0, 00:13:44.208 "state": "online", 00:13:44.208 "raid_level": "raid1", 00:13:44.208 "superblock": true, 00:13:44.208 "num_base_bdevs": 4, 00:13:44.208 "num_base_bdevs_discovered": 2, 00:13:44.208 "num_base_bdevs_operational": 2, 00:13:44.208 "base_bdevs_list": [ 00:13:44.208 { 00:13:44.208 
"name": null, 00:13:44.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.208 "is_configured": false, 00:13:44.208 "data_offset": 0, 00:13:44.208 "data_size": 63488 00:13:44.208 }, 00:13:44.208 { 00:13:44.208 "name": null, 00:13:44.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.208 "is_configured": false, 00:13:44.208 "data_offset": 2048, 00:13:44.208 "data_size": 63488 00:13:44.208 }, 00:13:44.208 { 00:13:44.208 "name": "BaseBdev3", 00:13:44.208 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:44.208 "is_configured": true, 00:13:44.208 "data_offset": 2048, 00:13:44.208 "data_size": 63488 00:13:44.208 }, 00:13:44.208 { 00:13:44.208 "name": "BaseBdev4", 00:13:44.208 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:44.208 "is_configured": true, 00:13:44.208 "data_offset": 2048, 00:13:44.208 "data_size": 63488 00:13:44.208 } 00:13:44.208 ] 00:13:44.208 }' 00:13:44.208 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.208 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.499 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.499 "name": "raid_bdev1", 00:13:44.499 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:44.499 "strip_size_kb": 0, 00:13:44.499 "state": "online", 00:13:44.499 "raid_level": "raid1", 00:13:44.499 "superblock": true, 00:13:44.499 "num_base_bdevs": 4, 00:13:44.499 "num_base_bdevs_discovered": 2, 00:13:44.499 "num_base_bdevs_operational": 2, 00:13:44.499 "base_bdevs_list": [ 00:13:44.499 { 00:13:44.499 "name": null, 00:13:44.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.499 "is_configured": false, 00:13:44.499 "data_offset": 0, 00:13:44.499 "data_size": 63488 00:13:44.499 }, 00:13:44.499 { 00:13:44.499 "name": null, 00:13:44.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.499 "is_configured": false, 00:13:44.499 "data_offset": 2048, 00:13:44.499 "data_size": 63488 00:13:44.499 }, 00:13:44.499 { 00:13:44.499 "name": "BaseBdev3", 00:13:44.499 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:44.499 "is_configured": true, 00:13:44.499 "data_offset": 2048, 00:13:44.500 "data_size": 63488 00:13:44.500 }, 00:13:44.500 { 00:13:44.500 "name": "BaseBdev4", 00:13:44.500 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:44.500 "is_configured": true, 00:13:44.500 "data_offset": 2048, 00:13:44.500 "data_size": 63488 00:13:44.500 } 00:13:44.500 ] 00:13:44.500 }' 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.500 [2024-12-07 02:46:55.539013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.500 [2024-12-07 02:46:55.539161] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:44.500 [2024-12-07 02:46:55.539188] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:44.500 request: 00:13:44.500 { 00:13:44.500 "base_bdev": "BaseBdev1", 00:13:44.500 "raid_bdev": "raid_bdev1", 00:13:44.500 "method": "bdev_raid_add_base_bdev", 00:13:44.500 "req_id": 1 
00:13:44.500 } 00:13:44.500 Got JSON-RPC error response 00:13:44.500 response: 00:13:44.500 { 00:13:44.500 "code": -22, 00:13:44.500 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:44.500 } 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.500 02:46:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.880 02:46:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.880 "name": "raid_bdev1", 00:13:45.880 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:45.880 "strip_size_kb": 0, 00:13:45.880 "state": "online", 00:13:45.880 "raid_level": "raid1", 00:13:45.880 "superblock": true, 00:13:45.880 "num_base_bdevs": 4, 00:13:45.880 "num_base_bdevs_discovered": 2, 00:13:45.880 "num_base_bdevs_operational": 2, 00:13:45.880 "base_bdevs_list": [ 00:13:45.880 { 00:13:45.880 "name": null, 00:13:45.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.880 "is_configured": false, 00:13:45.880 "data_offset": 0, 00:13:45.880 "data_size": 63488 00:13:45.880 }, 00:13:45.880 { 00:13:45.880 "name": null, 00:13:45.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.880 "is_configured": false, 00:13:45.880 "data_offset": 2048, 00:13:45.880 "data_size": 63488 00:13:45.880 }, 00:13:45.880 { 00:13:45.880 "name": "BaseBdev3", 00:13:45.880 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:45.880 "is_configured": true, 00:13:45.880 "data_offset": 2048, 00:13:45.880 "data_size": 63488 00:13:45.880 }, 00:13:45.880 { 00:13:45.880 "name": "BaseBdev4", 00:13:45.880 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:45.880 "is_configured": true, 00:13:45.880 "data_offset": 2048, 00:13:45.880 "data_size": 63488 00:13:45.880 } 00:13:45.880 ] 00:13:45.880 }' 00:13:45.880 02:46:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.880 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.141 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.141 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.141 "name": "raid_bdev1", 00:13:46.141 "uuid": "b3378da9-c085-468f-9e66-46193ff78e41", 00:13:46.141 "strip_size_kb": 0, 00:13:46.141 "state": "online", 00:13:46.141 "raid_level": "raid1", 00:13:46.141 "superblock": true, 00:13:46.141 "num_base_bdevs": 4, 00:13:46.141 "num_base_bdevs_discovered": 2, 00:13:46.141 "num_base_bdevs_operational": 2, 00:13:46.141 "base_bdevs_list": [ 00:13:46.141 { 00:13:46.141 "name": null, 00:13:46.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.141 "is_configured": false, 00:13:46.141 "data_offset": 0, 00:13:46.141 
"data_size": 63488 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "name": null, 00:13:46.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.141 "is_configured": false, 00:13:46.141 "data_offset": 2048, 00:13:46.141 "data_size": 63488 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "name": "BaseBdev3", 00:13:46.141 "uuid": "6092e17c-6722-532c-a00e-9a15335c42af", 00:13:46.141 "is_configured": true, 00:13:46.141 "data_offset": 2048, 00:13:46.141 "data_size": 63488 00:13:46.141 }, 00:13:46.141 { 00:13:46.141 "name": "BaseBdev4", 00:13:46.141 "uuid": "053660c1-c7ef-523b-8f58-43fd42c815e5", 00:13:46.141 "is_configured": true, 00:13:46.141 "data_offset": 2048, 00:13:46.141 "data_size": 63488 00:13:46.141 } 00:13:46.141 ] 00:13:46.141 }' 00:13:46.141 02:46:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89998 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 89998 ']' 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89998 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89998 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:46.141 
02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:46.141 killing process with pid 89998 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89998' 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89998 00:13:46.141 Received shutdown signal, test time was about 17.674692 seconds 00:13:46.141 00:13:46.141 Latency(us) 00:13:46.141 [2024-12-07T02:46:57.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.141 [2024-12-07T02:46:57.219Z] =================================================================================================================== 00:13:46.141 [2024-12-07T02:46:57.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:46.141 [2024-12-07 02:46:57.103999] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.141 [2024-12-07 02:46:57.104126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.141 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89998 00:13:46.141 [2024-12-07 02:46:57.104201] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.141 [2024-12-07 02:46:57.104215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:13:46.141 [2024-12-07 02:46:57.188930] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.711 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:46.711 00:13:46.711 real 0m19.854s 00:13:46.711 user 0m26.090s 00:13:46.711 sys 0m2.728s 00:13:46.711 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.711 ************************************ 00:13:46.711 END TEST raid_rebuild_test_sb_io 00:13:46.711 
************************************ 00:13:46.711 02:46:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.711 02:46:57 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:46.711 02:46:57 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:13:46.711 02:46:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:46.711 02:46:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:46.711 02:46:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.711 ************************************ 00:13:46.711 START TEST raid5f_state_function_test 00:13:46.711 ************************************ 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:46.711 02:46:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90710 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90710' 00:13:46.711 Process raid pid: 90710 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90710 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90710 ']' 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:46.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:46.711 02:46:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.711 [2024-12-07 02:46:57.747574] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:13:46.711 [2024-12-07 02:46:57.747761] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.971 [2024-12-07 02:46:57.911683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.971 [2024-12-07 02:46:57.983158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.231 [2024-12-07 02:46:58.058375] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.231 [2024-12-07 02:46:58.058415] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.490 [2024-12-07 02:46:58.553328] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.490 [2024-12-07 02:46:58.553380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.490 [2024-12-07 02:46:58.553396] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.490 [2024-12-07 02:46:58.553406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.490 [2024-12-07 02:46:58.553412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:47.490 [2024-12-07 02:46:58.553426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.490 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.751 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:47.751 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.751 "name": "Existed_Raid", 00:13:47.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.751 "strip_size_kb": 64, 00:13:47.751 "state": "configuring", 00:13:47.751 "raid_level": "raid5f", 00:13:47.751 "superblock": false, 00:13:47.751 "num_base_bdevs": 3, 00:13:47.751 "num_base_bdevs_discovered": 0, 00:13:47.751 "num_base_bdevs_operational": 3, 00:13:47.751 "base_bdevs_list": [ 00:13:47.751 { 00:13:47.751 "name": "BaseBdev1", 00:13:47.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.751 "is_configured": false, 00:13:47.751 "data_offset": 0, 00:13:47.751 "data_size": 0 00:13:47.751 }, 00:13:47.751 { 00:13:47.751 "name": "BaseBdev2", 00:13:47.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.751 "is_configured": false, 00:13:47.751 "data_offset": 0, 00:13:47.751 "data_size": 0 00:13:47.751 }, 00:13:47.751 { 00:13:47.751 "name": "BaseBdev3", 00:13:47.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.751 "is_configured": false, 00:13:47.751 "data_offset": 0, 00:13:47.751 "data_size": 0 00:13:47.751 } 00:13:47.751 ] 00:13:47.751 }' 00:13:47.751 02:46:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.751 02:46:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.011 [2024-12-07 02:46:59.032376] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.011 [2024-12-07 02:46:59.032419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000006280 name Existed_Raid, state configuring 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.011 [2024-12-07 02:46:59.044389] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.011 [2024-12-07 02:46:59.044425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.011 [2024-12-07 02:46:59.044432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.011 [2024-12-07 02:46:59.044442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.011 [2024-12-07 02:46:59.044447] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.011 [2024-12-07 02:46:59.044456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.011 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.012 [2024-12-07 02:46:59.071078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.012 BaseBdev1 00:13:48.012 02:46:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.012 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.271 [ 00:13:48.271 { 00:13:48.271 "name": "BaseBdev1", 00:13:48.271 "aliases": [ 00:13:48.271 "d9a563bd-7d46-44b5-bb96-d6dbbef6010a" 00:13:48.271 ], 00:13:48.271 "product_name": "Malloc disk", 00:13:48.271 "block_size": 512, 00:13:48.271 "num_blocks": 65536, 00:13:48.271 "uuid": "d9a563bd-7d46-44b5-bb96-d6dbbef6010a", 00:13:48.271 "assigned_rate_limits": { 00:13:48.271 "rw_ios_per_sec": 0, 00:13:48.271 
"rw_mbytes_per_sec": 0, 00:13:48.271 "r_mbytes_per_sec": 0, 00:13:48.271 "w_mbytes_per_sec": 0 00:13:48.271 }, 00:13:48.271 "claimed": true, 00:13:48.271 "claim_type": "exclusive_write", 00:13:48.271 "zoned": false, 00:13:48.271 "supported_io_types": { 00:13:48.271 "read": true, 00:13:48.271 "write": true, 00:13:48.271 "unmap": true, 00:13:48.271 "flush": true, 00:13:48.271 "reset": true, 00:13:48.271 "nvme_admin": false, 00:13:48.271 "nvme_io": false, 00:13:48.271 "nvme_io_md": false, 00:13:48.271 "write_zeroes": true, 00:13:48.271 "zcopy": true, 00:13:48.271 "get_zone_info": false, 00:13:48.271 "zone_management": false, 00:13:48.271 "zone_append": false, 00:13:48.271 "compare": false, 00:13:48.271 "compare_and_write": false, 00:13:48.271 "abort": true, 00:13:48.271 "seek_hole": false, 00:13:48.271 "seek_data": false, 00:13:48.271 "copy": true, 00:13:48.271 "nvme_iov_md": false 00:13:48.271 }, 00:13:48.271 "memory_domains": [ 00:13:48.271 { 00:13:48.271 "dma_device_id": "system", 00:13:48.271 "dma_device_type": 1 00:13:48.271 }, 00:13:48.271 { 00:13:48.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.271 "dma_device_type": 2 00:13:48.271 } 00:13:48.271 ], 00:13:48.271 "driver_specific": {} 00:13:48.271 } 00:13:48.271 ] 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.271 02:46:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.271 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.272 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.272 "name": "Existed_Raid", 00:13:48.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.272 "strip_size_kb": 64, 00:13:48.272 "state": "configuring", 00:13:48.272 "raid_level": "raid5f", 00:13:48.272 "superblock": false, 00:13:48.272 "num_base_bdevs": 3, 00:13:48.272 "num_base_bdevs_discovered": 1, 00:13:48.272 "num_base_bdevs_operational": 3, 00:13:48.272 "base_bdevs_list": [ 00:13:48.272 { 00:13:48.272 "name": "BaseBdev1", 00:13:48.272 "uuid": "d9a563bd-7d46-44b5-bb96-d6dbbef6010a", 00:13:48.272 "is_configured": true, 00:13:48.272 "data_offset": 0, 00:13:48.272 "data_size": 65536 00:13:48.272 }, 00:13:48.272 { 00:13:48.272 "name": 
"BaseBdev2", 00:13:48.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.272 "is_configured": false, 00:13:48.272 "data_offset": 0, 00:13:48.272 "data_size": 0 00:13:48.272 }, 00:13:48.272 { 00:13:48.272 "name": "BaseBdev3", 00:13:48.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.272 "is_configured": false, 00:13:48.272 "data_offset": 0, 00:13:48.272 "data_size": 0 00:13:48.272 } 00:13:48.272 ] 00:13:48.272 }' 00:13:48.272 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.272 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.531 [2024-12-07 02:46:59.566225] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.531 [2024-12-07 02:46:59.566262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.531 [2024-12-07 02:46:59.574251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.531 [2024-12-07 02:46:59.576326] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:13:48.531 [2024-12-07 02:46:59.576362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.531 [2024-12-07 02:46:59.576371] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.531 [2024-12-07 02:46:59.576380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:48.531 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.532 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.791 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.791 "name": "Existed_Raid", 00:13:48.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.791 "strip_size_kb": 64, 00:13:48.791 "state": "configuring", 00:13:48.791 "raid_level": "raid5f", 00:13:48.791 "superblock": false, 00:13:48.791 "num_base_bdevs": 3, 00:13:48.791 "num_base_bdevs_discovered": 1, 00:13:48.791 "num_base_bdevs_operational": 3, 00:13:48.791 "base_bdevs_list": [ 00:13:48.791 { 00:13:48.791 "name": "BaseBdev1", 00:13:48.791 "uuid": "d9a563bd-7d46-44b5-bb96-d6dbbef6010a", 00:13:48.791 "is_configured": true, 00:13:48.791 "data_offset": 0, 00:13:48.791 "data_size": 65536 00:13:48.791 }, 00:13:48.791 { 00:13:48.791 "name": "BaseBdev2", 00:13:48.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.791 "is_configured": false, 00:13:48.791 "data_offset": 0, 00:13:48.791 "data_size": 0 00:13:48.791 }, 00:13:48.791 { 00:13:48.791 "name": "BaseBdev3", 00:13:48.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.791 "is_configured": false, 00:13:48.791 "data_offset": 0, 00:13:48.791 "data_size": 0 00:13:48.791 } 00:13:48.791 ] 00:13:48.791 }' 00:13:48.791 02:46:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.791 02:46:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.050 [2024-12-07 02:47:00.061666] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.050 BaseBdev2 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.050 [ 00:13:49.050 { 00:13:49.050 "name": "BaseBdev2", 00:13:49.050 "aliases": [ 00:13:49.050 "999fb9ea-5f94-4005-981e-0b38440068f6" 00:13:49.050 ], 00:13:49.050 "product_name": "Malloc disk", 00:13:49.050 "block_size": 512, 00:13:49.050 "num_blocks": 65536, 00:13:49.050 "uuid": "999fb9ea-5f94-4005-981e-0b38440068f6", 00:13:49.050 "assigned_rate_limits": { 00:13:49.050 "rw_ios_per_sec": 0, 00:13:49.050 "rw_mbytes_per_sec": 0, 00:13:49.050 "r_mbytes_per_sec": 0, 00:13:49.050 "w_mbytes_per_sec": 0 00:13:49.050 }, 00:13:49.050 "claimed": true, 00:13:49.050 "claim_type": "exclusive_write", 00:13:49.050 "zoned": false, 00:13:49.050 "supported_io_types": { 00:13:49.050 "read": true, 00:13:49.050 "write": true, 00:13:49.050 "unmap": true, 00:13:49.050 "flush": true, 00:13:49.050 "reset": true, 00:13:49.050 "nvme_admin": false, 00:13:49.050 "nvme_io": false, 00:13:49.050 "nvme_io_md": false, 00:13:49.050 "write_zeroes": true, 00:13:49.050 "zcopy": true, 00:13:49.050 "get_zone_info": false, 00:13:49.050 "zone_management": false, 00:13:49.050 "zone_append": false, 00:13:49.050 "compare": false, 00:13:49.050 "compare_and_write": false, 00:13:49.050 "abort": true, 00:13:49.050 "seek_hole": false, 00:13:49.050 "seek_data": false, 00:13:49.050 "copy": true, 00:13:49.050 "nvme_iov_md": false 00:13:49.050 }, 00:13:49.050 "memory_domains": [ 00:13:49.050 { 00:13:49.050 "dma_device_id": "system", 00:13:49.050 "dma_device_type": 1 00:13:49.050 }, 00:13:49.050 { 00:13:49.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.050 "dma_device_type": 2 00:13:49.050 } 00:13:49.050 ], 00:13:49.050 "driver_specific": {} 00:13:49.050 } 00:13:49.050 ] 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.050 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.311 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:13:49.311 "name": "Existed_Raid", 00:13:49.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.311 "strip_size_kb": 64, 00:13:49.311 "state": "configuring", 00:13:49.311 "raid_level": "raid5f", 00:13:49.311 "superblock": false, 00:13:49.311 "num_base_bdevs": 3, 00:13:49.311 "num_base_bdevs_discovered": 2, 00:13:49.311 "num_base_bdevs_operational": 3, 00:13:49.311 "base_bdevs_list": [ 00:13:49.311 { 00:13:49.311 "name": "BaseBdev1", 00:13:49.311 "uuid": "d9a563bd-7d46-44b5-bb96-d6dbbef6010a", 00:13:49.311 "is_configured": true, 00:13:49.311 "data_offset": 0, 00:13:49.311 "data_size": 65536 00:13:49.311 }, 00:13:49.311 { 00:13:49.311 "name": "BaseBdev2", 00:13:49.311 "uuid": "999fb9ea-5f94-4005-981e-0b38440068f6", 00:13:49.311 "is_configured": true, 00:13:49.311 "data_offset": 0, 00:13:49.311 "data_size": 65536 00:13:49.311 }, 00:13:49.311 { 00:13:49.311 "name": "BaseBdev3", 00:13:49.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.311 "is_configured": false, 00:13:49.311 "data_offset": 0, 00:13:49.311 "data_size": 0 00:13:49.311 } 00:13:49.311 ] 00:13:49.311 }' 00:13:49.311 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.311 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 [2024-12-07 02:47:00.565330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.571 [2024-12-07 02:47:00.565405] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:49.571 [2024-12-07 02:47:00.565419] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:49.571 [2024-12-07 02:47:00.565772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:49.571 [2024-12-07 02:47:00.566292] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:49.571 [2024-12-07 02:47:00.566315] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:49.571 [2024-12-07 02:47:00.566540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.571 BaseBdev3 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 [ 00:13:49.571 { 00:13:49.571 "name": "BaseBdev3", 00:13:49.571 "aliases": [ 00:13:49.571 "d5a465d0-989e-4962-8b28-c7d8ac206d23" 00:13:49.571 ], 00:13:49.571 "product_name": "Malloc disk", 00:13:49.571 "block_size": 512, 00:13:49.571 "num_blocks": 65536, 00:13:49.571 "uuid": "d5a465d0-989e-4962-8b28-c7d8ac206d23", 00:13:49.571 "assigned_rate_limits": { 00:13:49.571 "rw_ios_per_sec": 0, 00:13:49.571 "rw_mbytes_per_sec": 0, 00:13:49.571 "r_mbytes_per_sec": 0, 00:13:49.571 "w_mbytes_per_sec": 0 00:13:49.571 }, 00:13:49.571 "claimed": true, 00:13:49.571 "claim_type": "exclusive_write", 00:13:49.571 "zoned": false, 00:13:49.571 "supported_io_types": { 00:13:49.571 "read": true, 00:13:49.571 "write": true, 00:13:49.571 "unmap": true, 00:13:49.571 "flush": true, 00:13:49.571 "reset": true, 00:13:49.571 "nvme_admin": false, 00:13:49.571 "nvme_io": false, 00:13:49.571 "nvme_io_md": false, 00:13:49.571 "write_zeroes": true, 00:13:49.571 "zcopy": true, 00:13:49.571 "get_zone_info": false, 00:13:49.571 "zone_management": false, 00:13:49.571 "zone_append": false, 00:13:49.571 "compare": false, 00:13:49.571 "compare_and_write": false, 00:13:49.571 "abort": true, 00:13:49.571 "seek_hole": false, 00:13:49.571 "seek_data": false, 00:13:49.571 "copy": true, 00:13:49.571 "nvme_iov_md": false 00:13:49.571 }, 00:13:49.571 "memory_domains": [ 00:13:49.571 { 00:13:49.571 "dma_device_id": "system", 00:13:49.571 "dma_device_type": 1 00:13:49.571 }, 00:13:49.571 { 00:13:49.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.571 "dma_device_type": 2 00:13:49.571 } 00:13:49.571 ], 00:13:49.571 "driver_specific": {} 00:13:49.571 } 00:13:49.571 ] 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.571 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.572 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.572 02:47:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.830 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.830 "name": "Existed_Raid", 00:13:49.830 "uuid": "9bceb150-25a4-417d-84b6-0a2faba4d746", 00:13:49.830 "strip_size_kb": 64, 00:13:49.830 "state": "online", 00:13:49.830 "raid_level": "raid5f", 00:13:49.830 "superblock": false, 00:13:49.830 "num_base_bdevs": 3, 00:13:49.830 "num_base_bdevs_discovered": 3, 00:13:49.830 "num_base_bdevs_operational": 3, 00:13:49.830 "base_bdevs_list": [ 00:13:49.830 { 00:13:49.830 "name": "BaseBdev1", 00:13:49.830 "uuid": "d9a563bd-7d46-44b5-bb96-d6dbbef6010a", 00:13:49.830 "is_configured": true, 00:13:49.830 "data_offset": 0, 00:13:49.830 "data_size": 65536 00:13:49.830 }, 00:13:49.830 { 00:13:49.830 "name": "BaseBdev2", 00:13:49.830 "uuid": "999fb9ea-5f94-4005-981e-0b38440068f6", 00:13:49.830 "is_configured": true, 00:13:49.830 "data_offset": 0, 00:13:49.830 "data_size": 65536 00:13:49.830 }, 00:13:49.830 { 00:13:49.830 "name": "BaseBdev3", 00:13:49.830 "uuid": "d5a465d0-989e-4962-8b28-c7d8ac206d23", 00:13:49.830 "is_configured": true, 00:13:49.830 "data_offset": 0, 00:13:49.831 "data_size": 65536 00:13:49.831 } 00:13:49.831 ] 00:13:49.831 }' 00:13:49.831 02:47:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.831 02:47:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:50.091 02:47:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.091 [2024-12-07 02:47:01.044702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:50.091 "name": "Existed_Raid", 00:13:50.091 "aliases": [ 00:13:50.091 "9bceb150-25a4-417d-84b6-0a2faba4d746" 00:13:50.091 ], 00:13:50.091 "product_name": "Raid Volume", 00:13:50.091 "block_size": 512, 00:13:50.091 "num_blocks": 131072, 00:13:50.091 "uuid": "9bceb150-25a4-417d-84b6-0a2faba4d746", 00:13:50.091 "assigned_rate_limits": { 00:13:50.091 "rw_ios_per_sec": 0, 00:13:50.091 "rw_mbytes_per_sec": 0, 00:13:50.091 "r_mbytes_per_sec": 0, 00:13:50.091 "w_mbytes_per_sec": 0 00:13:50.091 }, 00:13:50.091 "claimed": false, 00:13:50.091 "zoned": false, 00:13:50.091 "supported_io_types": { 00:13:50.091 "read": true, 00:13:50.091 "write": true, 00:13:50.091 "unmap": false, 00:13:50.091 "flush": false, 00:13:50.091 "reset": true, 00:13:50.091 "nvme_admin": false, 00:13:50.091 "nvme_io": false, 00:13:50.091 "nvme_io_md": false, 00:13:50.091 "write_zeroes": true, 00:13:50.091 "zcopy": false, 00:13:50.091 "get_zone_info": false, 00:13:50.091 "zone_management": false, 00:13:50.091 "zone_append": false, 
00:13:50.091 "compare": false, 00:13:50.091 "compare_and_write": false, 00:13:50.091 "abort": false, 00:13:50.091 "seek_hole": false, 00:13:50.091 "seek_data": false, 00:13:50.091 "copy": false, 00:13:50.091 "nvme_iov_md": false 00:13:50.091 }, 00:13:50.091 "driver_specific": { 00:13:50.091 "raid": { 00:13:50.091 "uuid": "9bceb150-25a4-417d-84b6-0a2faba4d746", 00:13:50.091 "strip_size_kb": 64, 00:13:50.091 "state": "online", 00:13:50.091 "raid_level": "raid5f", 00:13:50.091 "superblock": false, 00:13:50.091 "num_base_bdevs": 3, 00:13:50.091 "num_base_bdevs_discovered": 3, 00:13:50.091 "num_base_bdevs_operational": 3, 00:13:50.091 "base_bdevs_list": [ 00:13:50.091 { 00:13:50.091 "name": "BaseBdev1", 00:13:50.091 "uuid": "d9a563bd-7d46-44b5-bb96-d6dbbef6010a", 00:13:50.091 "is_configured": true, 00:13:50.091 "data_offset": 0, 00:13:50.091 "data_size": 65536 00:13:50.091 }, 00:13:50.091 { 00:13:50.091 "name": "BaseBdev2", 00:13:50.091 "uuid": "999fb9ea-5f94-4005-981e-0b38440068f6", 00:13:50.091 "is_configured": true, 00:13:50.091 "data_offset": 0, 00:13:50.091 "data_size": 65536 00:13:50.091 }, 00:13:50.091 { 00:13:50.091 "name": "BaseBdev3", 00:13:50.091 "uuid": "d5a465d0-989e-4962-8b28-c7d8ac206d23", 00:13:50.091 "is_configured": true, 00:13:50.091 "data_offset": 0, 00:13:50.091 "data_size": 65536 00:13:50.091 } 00:13:50.091 ] 00:13:50.091 } 00:13:50.091 } 00:13:50.091 }' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:50.091 BaseBdev2 00:13:50.091 BaseBdev3' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.091 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.351 [2024-12-07 02:47:01.240237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:50.351 
02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.351 "name": "Existed_Raid", 00:13:50.351 "uuid": "9bceb150-25a4-417d-84b6-0a2faba4d746", 00:13:50.351 "strip_size_kb": 64, 00:13:50.351 "state": 
"online", 00:13:50.351 "raid_level": "raid5f", 00:13:50.351 "superblock": false, 00:13:50.351 "num_base_bdevs": 3, 00:13:50.351 "num_base_bdevs_discovered": 2, 00:13:50.351 "num_base_bdevs_operational": 2, 00:13:50.351 "base_bdevs_list": [ 00:13:50.351 { 00:13:50.351 "name": null, 00:13:50.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.351 "is_configured": false, 00:13:50.351 "data_offset": 0, 00:13:50.351 "data_size": 65536 00:13:50.351 }, 00:13:50.351 { 00:13:50.351 "name": "BaseBdev2", 00:13:50.351 "uuid": "999fb9ea-5f94-4005-981e-0b38440068f6", 00:13:50.351 "is_configured": true, 00:13:50.351 "data_offset": 0, 00:13:50.351 "data_size": 65536 00:13:50.351 }, 00:13:50.351 { 00:13:50.351 "name": "BaseBdev3", 00:13:50.351 "uuid": "d5a465d0-989e-4962-8b28-c7d8ac206d23", 00:13:50.351 "is_configured": true, 00:13:50.351 "data_offset": 0, 00:13:50.351 "data_size": 65536 00:13:50.351 } 00:13:50.351 ] 00:13:50.351 }' 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.351 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 [2024-12-07 02:47:01.783292] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:50.922 [2024-12-07 02:47:01.783383] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.922 [2024-12-07 02:47:01.803656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 [2024-12-07 02:47:01.863597] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:50.922 [2024-12-07 02:47:01.863638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 BaseBdev2 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:50.922 [ 00:13:50.922 { 00:13:50.922 "name": "BaseBdev2", 00:13:50.922 "aliases": [ 00:13:50.922 "bb3873ab-18ca-44d1-bd73-33231269a5a0" 00:13:50.922 ], 00:13:50.922 "product_name": "Malloc disk", 00:13:50.922 "block_size": 512, 00:13:50.922 "num_blocks": 65536, 00:13:50.922 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0", 00:13:50.922 "assigned_rate_limits": { 00:13:50.922 "rw_ios_per_sec": 0, 00:13:50.922 "rw_mbytes_per_sec": 0, 00:13:50.922 "r_mbytes_per_sec": 0, 00:13:50.922 "w_mbytes_per_sec": 0 00:13:50.922 }, 00:13:50.922 "claimed": false, 00:13:50.922 "zoned": false, 00:13:50.922 "supported_io_types": { 00:13:50.922 "read": true, 00:13:50.922 "write": true, 00:13:50.922 "unmap": true, 00:13:50.922 "flush": true, 00:13:50.922 "reset": true, 00:13:50.922 "nvme_admin": false, 00:13:50.922 "nvme_io": false, 00:13:50.922 "nvme_io_md": false, 00:13:50.922 "write_zeroes": true, 00:13:50.922 "zcopy": true, 00:13:50.922 "get_zone_info": false, 00:13:50.922 "zone_management": false, 00:13:50.922 "zone_append": false, 00:13:50.922 "compare": false, 00:13:50.922 "compare_and_write": false, 00:13:50.922 "abort": true, 00:13:50.922 "seek_hole": false, 00:13:50.922 "seek_data": false, 00:13:50.922 "copy": true, 00:13:50.922 "nvme_iov_md": false 00:13:50.922 }, 00:13:50.922 "memory_domains": [ 00:13:50.922 { 00:13:50.922 "dma_device_id": "system", 00:13:50.922 "dma_device_type": 1 00:13:50.922 }, 00:13:50.922 { 00:13:50.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.922 "dma_device_type": 2 00:13:50.922 } 00:13:50.922 ], 00:13:50.922 "driver_specific": {} 00:13:50.922 } 00:13:50.922 ] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.922 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.182 BaseBdev3 00:13:51.183 02:47:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.183 02:47:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.183 [ 00:13:51.183 { 00:13:51.183 "name": "BaseBdev3", 00:13:51.183 "aliases": [ 00:13:51.183 "77e705d5-8643-4e72-9966-dd83f9cd5d88" 00:13:51.183 ], 00:13:51.183 "product_name": "Malloc disk", 00:13:51.183 "block_size": 512, 00:13:51.183 "num_blocks": 65536, 00:13:51.183 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88", 00:13:51.183 "assigned_rate_limits": { 00:13:51.183 "rw_ios_per_sec": 0, 00:13:51.183 "rw_mbytes_per_sec": 0, 00:13:51.183 "r_mbytes_per_sec": 0, 00:13:51.183 "w_mbytes_per_sec": 0 00:13:51.183 }, 00:13:51.183 "claimed": false, 00:13:51.183 "zoned": false, 00:13:51.183 "supported_io_types": { 00:13:51.183 "read": true, 00:13:51.183 "write": true, 00:13:51.183 "unmap": true, 00:13:51.183 "flush": true, 00:13:51.183 "reset": true, 00:13:51.183 "nvme_admin": false, 00:13:51.183 "nvme_io": false, 00:13:51.183 "nvme_io_md": false, 00:13:51.183 "write_zeroes": true, 00:13:51.183 "zcopy": true, 00:13:51.183 "get_zone_info": false, 00:13:51.183 "zone_management": false, 00:13:51.183 "zone_append": false, 00:13:51.183 "compare": false, 00:13:51.183 "compare_and_write": false, 00:13:51.183 "abort": true, 00:13:51.183 "seek_hole": false, 00:13:51.183 "seek_data": false, 00:13:51.183 "copy": true, 00:13:51.183 "nvme_iov_md": false 00:13:51.183 }, 00:13:51.183 "memory_domains": [ 00:13:51.183 { 00:13:51.183 "dma_device_id": "system", 00:13:51.183 "dma_device_type": 1 00:13:51.183 }, 00:13:51.183 { 00:13:51.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.183 "dma_device_type": 2 00:13:51.183 } 00:13:51.183 ], 00:13:51.183 "driver_specific": {} 00:13:51.183 } 00:13:51.183 ] 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:51.183 02:47:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.183 [2024-12-07 02:47:02.043847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.183 [2024-12-07 02:47:02.044001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.183 [2024-12-07 02:47:02.044048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.183 [2024-12-07 02:47:02.046163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.183 02:47:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.183 "name": "Existed_Raid", 00:13:51.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.183 "strip_size_kb": 64, 00:13:51.183 "state": "configuring", 00:13:51.183 "raid_level": "raid5f", 00:13:51.183 "superblock": false, 00:13:51.183 "num_base_bdevs": 3, 00:13:51.183 "num_base_bdevs_discovered": 2, 00:13:51.183 "num_base_bdevs_operational": 3, 00:13:51.183 "base_bdevs_list": [ 00:13:51.183 { 00:13:51.183 "name": "BaseBdev1", 00:13:51.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.183 "is_configured": false, 00:13:51.183 "data_offset": 0, 00:13:51.183 "data_size": 0 00:13:51.183 }, 00:13:51.183 { 00:13:51.183 "name": "BaseBdev2", 00:13:51.183 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0", 00:13:51.183 "is_configured": true, 00:13:51.183 "data_offset": 0, 00:13:51.183 "data_size": 65536 00:13:51.183 }, 00:13:51.183 { 00:13:51.183 "name": "BaseBdev3", 00:13:51.183 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88", 00:13:51.183 "is_configured": true, 
00:13:51.183 "data_offset": 0, 00:13:51.183 "data_size": 65536 00:13:51.183 } 00:13:51.183 ] 00:13:51.183 }' 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.183 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.443 [2024-12-07 02:47:02.491012] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.443 02:47:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.443 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.703 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.703 "name": "Existed_Raid", 00:13:51.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.703 "strip_size_kb": 64, 00:13:51.703 "state": "configuring", 00:13:51.703 "raid_level": "raid5f", 00:13:51.703 "superblock": false, 00:13:51.703 "num_base_bdevs": 3, 00:13:51.703 "num_base_bdevs_discovered": 1, 00:13:51.703 "num_base_bdevs_operational": 3, 00:13:51.703 "base_bdevs_list": [ 00:13:51.703 { 00:13:51.703 "name": "BaseBdev1", 00:13:51.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.703 "is_configured": false, 00:13:51.703 "data_offset": 0, 00:13:51.703 "data_size": 0 00:13:51.703 }, 00:13:51.703 { 00:13:51.703 "name": null, 00:13:51.703 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0", 00:13:51.703 "is_configured": false, 00:13:51.703 "data_offset": 0, 00:13:51.703 "data_size": 65536 00:13:51.703 }, 00:13:51.703 { 00:13:51.703 "name": "BaseBdev3", 00:13:51.703 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88", 00:13:51.703 "is_configured": true, 00:13:51.703 "data_offset": 0, 00:13:51.703 "data_size": 65536 00:13:51.703 } 00:13:51.703 ] 00:13:51.703 }' 00:13:51.703 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.703 02:47:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.963 [2024-12-07 02:47:02.990658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.963 BaseBdev1 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.963 02:47:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.963 02:47:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.963 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.963 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:51.963 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.963 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:51.963 [
00:13:51.963 {
00:13:51.963 "name": "BaseBdev1",
00:13:51.963 "aliases": [
00:13:51.964 "71c209b0-b4cf-481e-98d0-ebc17af9f7d5"
00:13:51.964 ],
00:13:51.964 "product_name": "Malloc disk",
00:13:51.964 "block_size": 512,
00:13:51.964 "num_blocks": 65536,
00:13:51.964 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:51.964 "assigned_rate_limits": {
00:13:51.964 "rw_ios_per_sec": 0,
00:13:51.964 "rw_mbytes_per_sec": 0,
00:13:51.964 "r_mbytes_per_sec": 0,
00:13:51.964 "w_mbytes_per_sec": 0
00:13:51.964 },
00:13:51.964 "claimed": true,
00:13:51.964 "claim_type": "exclusive_write",
00:13:51.964 "zoned": false,
00:13:51.964 "supported_io_types": {
00:13:51.964 "read": true,
00:13:51.964 "write": true,
00:13:51.964 "unmap": true,
00:13:51.964 "flush": true,
00:13:51.964 "reset": true,
00:13:51.964 "nvme_admin": false,
00:13:51.964 "nvme_io": false,
00:13:51.964 "nvme_io_md": false,
00:13:51.964 "write_zeroes": true,
00:13:51.964 "zcopy": true,
00:13:51.964 "get_zone_info": false,
00:13:51.964 "zone_management": false,
00:13:51.964 "zone_append": false,
00:13:51.964 "compare": false,
00:13:51.964 "compare_and_write": false,
00:13:51.964 "abort": true,
00:13:51.964 "seek_hole": false,
00:13:51.964 "seek_data": false,
00:13:51.964 "copy": true,
00:13:51.964 "nvme_iov_md": false
00:13:51.964 },
00:13:51.964 "memory_domains": [
00:13:51.964 {
00:13:51.964 "dma_device_id": "system",
00:13:51.964 "dma_device_type": 1
00:13:51.964 },
00:13:51.964 {
00:13:51.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:51.964 "dma_device_type": 2
00:13:51.964 }
00:13:51.964 ],
00:13:51.964 "driver_specific": {}
00:13:51.964 }
00:13:51.964 ]
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:51.964 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.224 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.224 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:52.224 "name": "Existed_Raid",
00:13:52.224 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.224 "strip_size_kb": 64,
00:13:52.224 "state": "configuring",
00:13:52.224 "raid_level": "raid5f",
00:13:52.224 "superblock": false,
00:13:52.224 "num_base_bdevs": 3,
00:13:52.224 "num_base_bdevs_discovered": 2,
00:13:52.224 "num_base_bdevs_operational": 3,
00:13:52.224 "base_bdevs_list": [
00:13:52.224 {
00:13:52.224 "name": "BaseBdev1",
00:13:52.224 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:52.224 "is_configured": true,
00:13:52.224 "data_offset": 0,
00:13:52.224 "data_size": 65536
00:13:52.224 },
00:13:52.224 {
00:13:52.224 "name": null,
00:13:52.224 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0",
00:13:52.224 "is_configured": false,
00:13:52.224 "data_offset": 0,
00:13:52.224 "data_size": 65536
00:13:52.224 },
00:13:52.224 {
00:13:52.224 "name": "BaseBdev3",
00:13:52.224 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88",
00:13:52.224 "is_configured": true,
00:13:52.224 "data_offset": 0,
00:13:52.224 "data_size": 65536
00:13:52.224 }
00:13:52.224 ]
00:13:52.224 }'
00:13:52.224 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:52.224 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.484 [2024-12-07 02:47:03.501788] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:52.484 "name": "Existed_Raid",
00:13:52.484 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:52.484 "strip_size_kb": 64,
00:13:52.484 "state": "configuring",
00:13:52.484 "raid_level": "raid5f",
00:13:52.484 "superblock": false,
00:13:52.484 "num_base_bdevs": 3,
00:13:52.484 "num_base_bdevs_discovered": 1,
00:13:52.484 "num_base_bdevs_operational": 3,
00:13:52.484 "base_bdevs_list": [
00:13:52.484 {
00:13:52.484 "name": "BaseBdev1",
00:13:52.484 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:52.484 "is_configured": true,
00:13:52.484 "data_offset": 0,
00:13:52.484 "data_size": 65536
00:13:52.484 },
00:13:52.484 {
00:13:52.484 "name": null,
00:13:52.484 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0",
00:13:52.484 "is_configured": false,
00:13:52.484 "data_offset": 0,
00:13:52.484 "data_size": 65536
00:13:52.484 },
00:13:52.484 {
00:13:52.484 "name": null,
00:13:52.484 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88",
00:13:52.484 "is_configured": false,
00:13:52.484 "data_offset": 0,
00:13:52.484 "data_size": 65536
00:13:52.484 }
00:13:52.484 ]
00:13:52.484 }'
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:52.484 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.053 [2024-12-07 02:47:03.988990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.053 02:47:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.053 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.053 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.053 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.053 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.053 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.053 "name": "Existed_Raid",
00:13:53.053 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.053 "strip_size_kb": 64,
00:13:53.053 "state": "configuring",
00:13:53.053 "raid_level": "raid5f",
00:13:53.053 "superblock": false,
00:13:53.053 "num_base_bdevs": 3,
00:13:53.053 "num_base_bdevs_discovered": 2,
00:13:53.053 "num_base_bdevs_operational": 3,
00:13:53.053 "base_bdevs_list": [
00:13:53.053 {
00:13:53.053 "name": "BaseBdev1",
00:13:53.053 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:53.053 "is_configured": true,
00:13:53.053 "data_offset": 0,
00:13:53.053 "data_size": 65536
00:13:53.053 },
00:13:53.053 {
00:13:53.053 "name": null,
00:13:53.053 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0",
00:13:53.053 "is_configured": false,
00:13:53.053 "data_offset": 0,
00:13:53.053 "data_size": 65536
00:13:53.053 },
00:13:53.053 {
00:13:53.053 "name": "BaseBdev3",
00:13:53.053 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88",
00:13:53.053 "is_configured": true,
00:13:53.053 "data_offset": 0,
00:13:53.053 "data_size": 65536
00:13:53.053 }
00:13:53.053 ]
00:13:53.053 }'
00:13:53.053 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.053 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.622 [2024-12-07 02:47:04.504082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.622 "name": "Existed_Raid",
00:13:53.622 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:53.622 "strip_size_kb": 64,
00:13:53.622 "state": "configuring",
00:13:53.622 "raid_level": "raid5f",
00:13:53.622 "superblock": false,
00:13:53.622 "num_base_bdevs": 3,
00:13:53.622 "num_base_bdevs_discovered": 1,
00:13:53.622 "num_base_bdevs_operational": 3,
00:13:53.622 "base_bdevs_list": [
00:13:53.622 {
00:13:53.622 "name": null,
00:13:53.622 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:53.622 "is_configured": false,
00:13:53.622 "data_offset": 0,
00:13:53.622 "data_size": 65536
00:13:53.622 },
00:13:53.622 {
00:13:53.622 "name": null,
00:13:53.622 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0",
00:13:53.622 "is_configured": false,
00:13:53.622 "data_offset": 0,
00:13:53.622 "data_size": 65536
00:13:53.622 },
00:13:53.622 {
00:13:53.622 "name": "BaseBdev3",
00:13:53.622 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88",
00:13:53.622 "is_configured": true,
00:13:53.622 "data_offset": 0,
00:13:53.622 "data_size": 65536
00:13:53.622 }
00:13:53.622 ]
00:13:53.622 }'
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.622 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.882 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:13:53.882 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.882 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.882 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.141 [2024-12-07 02:47:04.982613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.141 02:47:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.141 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.141 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.141 "name": "Existed_Raid",
00:13:54.141 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:54.141 "strip_size_kb": 64,
00:13:54.141 "state": "configuring",
00:13:54.141 "raid_level": "raid5f",
00:13:54.141 "superblock": false,
00:13:54.141 "num_base_bdevs": 3,
00:13:54.141 "num_base_bdevs_discovered": 2,
00:13:54.141 "num_base_bdevs_operational": 3,
00:13:54.141 "base_bdevs_list": [
00:13:54.141 {
00:13:54.141 "name": null,
00:13:54.141 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:54.141 "is_configured": false,
00:13:54.141 "data_offset": 0,
00:13:54.141 "data_size": 65536
00:13:54.141 },
00:13:54.141 {
00:13:54.141 "name": "BaseBdev2",
00:13:54.141 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0",
00:13:54.141 "is_configured": true,
00:13:54.141 "data_offset": 0,
00:13:54.141 "data_size": 65536
00:13:54.141 },
00:13:54.141 {
00:13:54.141 "name": "BaseBdev3",
00:13:54.141 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88",
00:13:54.141 "is_configured": true,
00:13:54.141 "data_offset": 0,
00:13:54.141 "data_size": 65536
00:13:54.141 }
00:13:54.141 ]
00:13:54.141 }'
00:13:54.141 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.141 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.400 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.400 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.400 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:13:54.400 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 71c209b0-b4cf-481e-98d0-ebc17af9f7d5
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.659 [2024-12-07 02:47:05.583776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:13:54.659 [2024-12-07 02:47:05.583886] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00
00:13:54.659 [2024-12-07 02:47:05.583903] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:13:54.659 [2024-12-07 02:47:05.584211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:13:54.659 [2024-12-07 02:47:05.584693] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00
00:13:54.659 [2024-12-07 02:47:05.584706] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00
00:13:54.659 [2024-12-07 02:47:05.584896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:54.659 NewBaseBdev
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.659 [
00:13:54.659 {
00:13:54.659 "name": "NewBaseBdev",
00:13:54.659 "aliases": [
00:13:54.659 "71c209b0-b4cf-481e-98d0-ebc17af9f7d5"
00:13:54.659 ],
00:13:54.659 "product_name": "Malloc disk",
00:13:54.659 "block_size": 512,
00:13:54.659 "num_blocks": 65536,
00:13:54.659 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:54.659 "assigned_rate_limits": {
00:13:54.659 "rw_ios_per_sec": 0,
00:13:54.659 "rw_mbytes_per_sec": 0,
00:13:54.659 "r_mbytes_per_sec": 0,
00:13:54.659 "w_mbytes_per_sec": 0
00:13:54.659 },
00:13:54.659 "claimed": true,
00:13:54.659 "claim_type": "exclusive_write",
00:13:54.659 "zoned": false,
00:13:54.659 "supported_io_types": {
00:13:54.659 "read": true,
00:13:54.659 "write": true,
00:13:54.659 "unmap": true,
00:13:54.659 "flush": true,
00:13:54.659 "reset": true,
00:13:54.659 "nvme_admin": false,
00:13:54.659 "nvme_io": false,
00:13:54.659 "nvme_io_md": false,
00:13:54.659 "write_zeroes": true,
00:13:54.659 "zcopy": true,
00:13:54.659 "get_zone_info": false,
00:13:54.659 "zone_management": false,
00:13:54.659 "zone_append": false,
00:13:54.659 "compare": false,
00:13:54.659 "compare_and_write": false,
00:13:54.659 "abort": true,
00:13:54.659 "seek_hole": false,
00:13:54.659 "seek_data": false,
00:13:54.659 "copy": true,
00:13:54.659 "nvme_iov_md": false
00:13:54.659 },
00:13:54.659 "memory_domains": [
00:13:54.659 {
00:13:54.659 "dma_device_id": "system",
00:13:54.659 "dma_device_type": 1
00:13:54.659 },
00:13:54.659 {
00:13:54.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:54.659 "dma_device_type": 2
00:13:54.659 }
00:13:54.659 ],
00:13:54.659 "driver_specific": {}
00:13:54.659 }
00:13:54.659 ]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.659 "name": "Existed_Raid",
00:13:54.659 "uuid": "e26b0b5a-46bd-45e3-947e-08281a522d90",
00:13:54.659 "strip_size_kb": 64,
00:13:54.659 "state": "online",
00:13:54.659 "raid_level": "raid5f",
00:13:54.659 "superblock": false,
00:13:54.659 "num_base_bdevs": 3,
00:13:54.659 "num_base_bdevs_discovered": 3,
00:13:54.659 "num_base_bdevs_operational": 3,
00:13:54.659 "base_bdevs_list": [
00:13:54.659 {
00:13:54.659 "name": "NewBaseBdev",
00:13:54.659 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:54.659 "is_configured": true,
00:13:54.659 "data_offset": 0,
00:13:54.659 "data_size": 65536
00:13:54.659 },
00:13:54.659 {
00:13:54.659 "name": "BaseBdev2",
00:13:54.659 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0",
00:13:54.659 "is_configured": true,
00:13:54.659 "data_offset": 0,
00:13:54.659 "data_size": 65536
00:13:54.659 },
00:13:54.659 {
00:13:54.659 "name": "BaseBdev3",
00:13:54.659 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88",
00:13:54.659 "is_configured": true,
00:13:54.659 "data_offset": 0,
00:13:54.659 "data_size": 65536
00:13:54.659 }
00:13:54.659 ]
00:13:54.659 }'
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.659 02:47:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.226 [2024-12-07 02:47:06.023395] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:55.226 "name": "Existed_Raid",
00:13:55.226 "aliases": [
00:13:55.226 "e26b0b5a-46bd-45e3-947e-08281a522d90"
00:13:55.226 ],
00:13:55.226 "product_name": "Raid Volume",
00:13:55.226 "block_size": 512,
00:13:55.226 "num_blocks": 131072,
00:13:55.226 "uuid": "e26b0b5a-46bd-45e3-947e-08281a522d90",
00:13:55.226 "assigned_rate_limits": {
00:13:55.226 "rw_ios_per_sec": 0,
00:13:55.226 "rw_mbytes_per_sec": 0,
00:13:55.226 "r_mbytes_per_sec": 0,
00:13:55.226 "w_mbytes_per_sec": 0
00:13:55.226 },
00:13:55.226 "claimed": false,
00:13:55.226 "zoned": false,
00:13:55.226 "supported_io_types": {
00:13:55.226 "read": true,
00:13:55.226 "write": true,
00:13:55.226 "unmap": false,
00:13:55.226 "flush": false,
00:13:55.226 "reset": true,
00:13:55.226 "nvme_admin": false,
00:13:55.226 "nvme_io": false,
00:13:55.226 "nvme_io_md": false,
00:13:55.226 "write_zeroes": true,
00:13:55.226 "zcopy": false,
00:13:55.226 "get_zone_info": false,
00:13:55.226 "zone_management": false,
00:13:55.226 "zone_append": false,
00:13:55.226 "compare": false,
00:13:55.226 "compare_and_write": false,
00:13:55.226 "abort": false,
00:13:55.226 "seek_hole": false,
00:13:55.226 "seek_data": false,
00:13:55.226 "copy": false,
00:13:55.226 "nvme_iov_md": false
00:13:55.226 },
00:13:55.226 "driver_specific": {
00:13:55.226 "raid": {
00:13:55.226 "uuid": "e26b0b5a-46bd-45e3-947e-08281a522d90",
00:13:55.226 "strip_size_kb": 64,
00:13:55.226 "state": "online",
00:13:55.226 "raid_level": "raid5f",
00:13:55.226 "superblock": false,
00:13:55.226 "num_base_bdevs": 3,
00:13:55.226 "num_base_bdevs_discovered": 3,
00:13:55.226 "num_base_bdevs_operational": 3,
00:13:55.226 "base_bdevs_list": [
00:13:55.226 {
00:13:55.226 "name": "NewBaseBdev",
00:13:55.226 "uuid": "71c209b0-b4cf-481e-98d0-ebc17af9f7d5",
00:13:55.226 "is_configured": true,
00:13:55.226 "data_offset": 0,
00:13:55.226 "data_size": 65536
00:13:55.226 },
00:13:55.226 {
00:13:55.226 "name": "BaseBdev2",
00:13:55.226 "uuid": "bb3873ab-18ca-44d1-bd73-33231269a5a0",
00:13:55.226 "is_configured": true,
00:13:55.226 "data_offset": 0,
00:13:55.226 "data_size": 65536
00:13:55.226 },
00:13:55.226 {
00:13:55.226 "name": "BaseBdev3",
00:13:55.226 "uuid": "77e705d5-8643-4e72-9966-dd83f9cd5d88",
00:13:55.226 "is_configured": true,
00:13:55.226 "data_offset": 0,
00:13:55.226 "data_size": 65536
00:13:55.226 }
00:13:55.226 ]
00:13:55.226 }
00:13:55.226 }
00:13:55.226 }'
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:13:55.226 BaseBdev2
00:13:55.226 BaseBdev3'
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.226 [2024-12-07 02:47:06.294752] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.226 [2024-12-07 02:47:06.294774] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.226 [2024-12-07 02:47:06.294833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.226 [2024-12-07 02:47:06.295077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.226 [2024-12-07 02:47:06.295098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90710 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90710 ']' 00:13:55.226 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 
90710 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90710 00:13:55.485 killing process with pid 90710 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90710' 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90710 00:13:55.485 [2024-12-07 02:47:06.344214] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.485 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90710 00:13:55.485 [2024-12-07 02:47:06.402375] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.745 02:47:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:55.745 00:13:55.745 real 0m9.135s 00:13:55.745 user 0m15.281s 00:13:55.745 sys 0m2.040s 00:13:55.745 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.745 02:47:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.745 ************************************ 00:13:55.745 END TEST raid5f_state_function_test 00:13:55.745 ************************************ 00:13:56.004 02:47:06 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:56.004 02:47:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:56.004 
02:47:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:56.004 02:47:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.004 ************************************ 00:13:56.004 START TEST raid5f_state_function_test_sb 00:13:56.004 ************************************ 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:56.004 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:56.004 
02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=91320 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:56.005 Process raid pid: 91320 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91320' 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 91320 00:13:56.005 02:47:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 91320 ']' 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.005 02:47:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.005 [2024-12-07 02:47:06.958842] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:13:56.005 [2024-12-07 02:47:06.959473] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.264 [2024-12-07 02:47:07.119814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.264 [2024-12-07 02:47:07.189216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.264 [2024-12-07 02:47:07.264178] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.264 [2024-12-07 02:47:07.264316] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:56.833 02:47:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.833 [2024-12-07 02:47:07.787138] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.833 [2024-12-07 02:47:07.787249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.833 [2024-12-07 02:47:07.787286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.833 [2024-12-07 02:47:07.787312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.833 [2024-12-07 02:47:07.787329] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.833 [2024-12-07 02:47:07.787353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.833 "name": "Existed_Raid", 00:13:56.833 "uuid": "21dd6bba-3a55-4901-b4a4-cc728695028e", 00:13:56.833 "strip_size_kb": 64, 00:13:56.833 "state": "configuring", 00:13:56.833 "raid_level": "raid5f", 00:13:56.833 "superblock": true, 00:13:56.833 "num_base_bdevs": 3, 00:13:56.833 "num_base_bdevs_discovered": 0, 00:13:56.833 "num_base_bdevs_operational": 3, 00:13:56.833 "base_bdevs_list": [ 00:13:56.833 { 00:13:56.833 "name": "BaseBdev1", 00:13:56.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.833 "is_configured": false, 00:13:56.833 "data_offset": 0, 00:13:56.833 "data_size": 0 00:13:56.833 }, 00:13:56.833 { 00:13:56.833 "name": "BaseBdev2", 00:13:56.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.833 "is_configured": false, 00:13:56.833 
"data_offset": 0, 00:13:56.833 "data_size": 0 00:13:56.833 }, 00:13:56.833 { 00:13:56.833 "name": "BaseBdev3", 00:13:56.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.833 "is_configured": false, 00:13:56.833 "data_offset": 0, 00:13:56.833 "data_size": 0 00:13:56.833 } 00:13:56.833 ] 00:13:56.833 }' 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.833 02:47:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 [2024-12-07 02:47:08.258280] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.408 [2024-12-07 02:47:08.258323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 [2024-12-07 02:47:08.270292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.408 [2024-12-07 02:47:08.270328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.408 [2024-12-07 02:47:08.270335] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.408 [2024-12-07 02:47:08.270345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.408 [2024-12-07 02:47:08.270350] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:57.408 [2024-12-07 02:47:08.270359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 [2024-12-07 02:47:08.296942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.408 BaseBdev1 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 [ 00:13:57.408 { 00:13:57.408 "name": "BaseBdev1", 00:13:57.408 "aliases": [ 00:13:57.408 "a77bee19-b2f4-4ba6-bb82-461a4faadeb0" 00:13:57.408 ], 00:13:57.408 "product_name": "Malloc disk", 00:13:57.408 "block_size": 512, 00:13:57.408 "num_blocks": 65536, 00:13:57.408 "uuid": "a77bee19-b2f4-4ba6-bb82-461a4faadeb0", 00:13:57.408 "assigned_rate_limits": { 00:13:57.408 "rw_ios_per_sec": 0, 00:13:57.408 "rw_mbytes_per_sec": 0, 00:13:57.408 "r_mbytes_per_sec": 0, 00:13:57.408 "w_mbytes_per_sec": 0 00:13:57.408 }, 00:13:57.408 "claimed": true, 00:13:57.408 "claim_type": "exclusive_write", 00:13:57.408 "zoned": false, 00:13:57.408 "supported_io_types": { 00:13:57.408 "read": true, 00:13:57.408 "write": true, 00:13:57.408 "unmap": true, 00:13:57.408 "flush": true, 00:13:57.408 "reset": true, 00:13:57.408 "nvme_admin": false, 00:13:57.408 "nvme_io": false, 00:13:57.408 "nvme_io_md": false, 00:13:57.408 "write_zeroes": true, 00:13:57.408 "zcopy": true, 00:13:57.408 "get_zone_info": false, 00:13:57.408 "zone_management": false, 00:13:57.408 "zone_append": false, 00:13:57.408 "compare": false, 00:13:57.408 "compare_and_write": false, 00:13:57.408 "abort": true, 00:13:57.408 "seek_hole": false, 00:13:57.408 
"seek_data": false, 00:13:57.408 "copy": true, 00:13:57.408 "nvme_iov_md": false 00:13:57.408 }, 00:13:57.408 "memory_domains": [ 00:13:57.408 { 00:13:57.408 "dma_device_id": "system", 00:13:57.408 "dma_device_type": 1 00:13:57.408 }, 00:13:57.408 { 00:13:57.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.408 "dma_device_type": 2 00:13:57.408 } 00:13:57.408 ], 00:13:57.408 "driver_specific": {} 00:13:57.408 } 00:13:57.408 ] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.408 "name": "Existed_Raid", 00:13:57.408 "uuid": "aad762ba-19bb-4be4-a024-97c75ba0a3ba", 00:13:57.408 "strip_size_kb": 64, 00:13:57.408 "state": "configuring", 00:13:57.408 "raid_level": "raid5f", 00:13:57.408 "superblock": true, 00:13:57.408 "num_base_bdevs": 3, 00:13:57.408 "num_base_bdevs_discovered": 1, 00:13:57.408 "num_base_bdevs_operational": 3, 00:13:57.408 "base_bdevs_list": [ 00:13:57.408 { 00:13:57.408 "name": "BaseBdev1", 00:13:57.408 "uuid": "a77bee19-b2f4-4ba6-bb82-461a4faadeb0", 00:13:57.408 "is_configured": true, 00:13:57.408 "data_offset": 2048, 00:13:57.408 "data_size": 63488 00:13:57.408 }, 00:13:57.408 { 00:13:57.408 "name": "BaseBdev2", 00:13:57.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.408 "is_configured": false, 00:13:57.408 "data_offset": 0, 00:13:57.408 "data_size": 0 00:13:57.408 }, 00:13:57.408 { 00:13:57.408 "name": "BaseBdev3", 00:13:57.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.408 "is_configured": false, 00:13:57.408 "data_offset": 0, 00:13:57.408 "data_size": 0 00:13:57.408 } 00:13:57.408 ] 00:13:57.408 }' 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.408 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.977 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:13:57.977 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.977 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.977 [2024-12-07 02:47:08.772158] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.977 [2024-12-07 02:47:08.772239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:13:57.977 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.977 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.978 [2024-12-07 02:47:08.780187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.978 [2024-12-07 02:47:08.782283] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.978 [2024-12-07 02:47:08.782354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.978 [2024-12-07 02:47:08.782380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:57.978 [2024-12-07 02:47:08.782401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.978 "name": 
"Existed_Raid", 00:13:57.978 "uuid": "14dea6c6-5840-4120-be91-7234f274b1eb", 00:13:57.978 "strip_size_kb": 64, 00:13:57.978 "state": "configuring", 00:13:57.978 "raid_level": "raid5f", 00:13:57.978 "superblock": true, 00:13:57.978 "num_base_bdevs": 3, 00:13:57.978 "num_base_bdevs_discovered": 1, 00:13:57.978 "num_base_bdevs_operational": 3, 00:13:57.978 "base_bdevs_list": [ 00:13:57.978 { 00:13:57.978 "name": "BaseBdev1", 00:13:57.978 "uuid": "a77bee19-b2f4-4ba6-bb82-461a4faadeb0", 00:13:57.978 "is_configured": true, 00:13:57.978 "data_offset": 2048, 00:13:57.978 "data_size": 63488 00:13:57.978 }, 00:13:57.978 { 00:13:57.978 "name": "BaseBdev2", 00:13:57.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.978 "is_configured": false, 00:13:57.978 "data_offset": 0, 00:13:57.978 "data_size": 0 00:13:57.978 }, 00:13:57.978 { 00:13:57.978 "name": "BaseBdev3", 00:13:57.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.978 "is_configured": false, 00:13:57.978 "data_offset": 0, 00:13:57.978 "data_size": 0 00:13:57.978 } 00:13:57.978 ] 00:13:57.978 }' 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.978 02:47:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.236 [2024-12-07 02:47:09.286723] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.236 BaseBdev2 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.236 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.495 [ 00:13:58.495 { 00:13:58.495 "name": "BaseBdev2", 00:13:58.495 "aliases": [ 00:13:58.495 "70d80bb4-dbb3-4f46-a26e-c657fbb14066" 00:13:58.495 ], 00:13:58.495 "product_name": "Malloc disk", 00:13:58.495 "block_size": 512, 00:13:58.495 "num_blocks": 65536, 00:13:58.495 "uuid": "70d80bb4-dbb3-4f46-a26e-c657fbb14066", 00:13:58.495 "assigned_rate_limits": { 00:13:58.495 "rw_ios_per_sec": 0, 00:13:58.495 "rw_mbytes_per_sec": 0, 00:13:58.495 "r_mbytes_per_sec": 0, 00:13:58.495 "w_mbytes_per_sec": 0 00:13:58.495 }, 00:13:58.495 "claimed": true, 
00:13:58.495 "claim_type": "exclusive_write", 00:13:58.495 "zoned": false, 00:13:58.495 "supported_io_types": { 00:13:58.495 "read": true, 00:13:58.495 "write": true, 00:13:58.495 "unmap": true, 00:13:58.495 "flush": true, 00:13:58.495 "reset": true, 00:13:58.495 "nvme_admin": false, 00:13:58.495 "nvme_io": false, 00:13:58.495 "nvme_io_md": false, 00:13:58.495 "write_zeroes": true, 00:13:58.495 "zcopy": true, 00:13:58.495 "get_zone_info": false, 00:13:58.495 "zone_management": false, 00:13:58.495 "zone_append": false, 00:13:58.495 "compare": false, 00:13:58.495 "compare_and_write": false, 00:13:58.495 "abort": true, 00:13:58.495 "seek_hole": false, 00:13:58.495 "seek_data": false, 00:13:58.495 "copy": true, 00:13:58.495 "nvme_iov_md": false 00:13:58.495 }, 00:13:58.495 "memory_domains": [ 00:13:58.495 { 00:13:58.495 "dma_device_id": "system", 00:13:58.495 "dma_device_type": 1 00:13:58.495 }, 00:13:58.495 { 00:13:58.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.495 "dma_device_type": 2 00:13:58.495 } 00:13:58.495 ], 00:13:58.495 "driver_specific": {} 00:13:58.495 } 00:13:58.495 ] 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.495 02:47:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.495 "name": "Existed_Raid", 00:13:58.495 "uuid": "14dea6c6-5840-4120-be91-7234f274b1eb", 00:13:58.495 "strip_size_kb": 64, 00:13:58.495 "state": "configuring", 00:13:58.495 "raid_level": "raid5f", 00:13:58.495 "superblock": true, 00:13:58.495 "num_base_bdevs": 3, 00:13:58.495 "num_base_bdevs_discovered": 2, 00:13:58.495 "num_base_bdevs_operational": 3, 00:13:58.495 "base_bdevs_list": [ 00:13:58.495 { 00:13:58.495 "name": "BaseBdev1", 00:13:58.495 "uuid": "a77bee19-b2f4-4ba6-bb82-461a4faadeb0", 
00:13:58.495 "is_configured": true, 00:13:58.495 "data_offset": 2048, 00:13:58.495 "data_size": 63488 00:13:58.495 }, 00:13:58.495 { 00:13:58.495 "name": "BaseBdev2", 00:13:58.495 "uuid": "70d80bb4-dbb3-4f46-a26e-c657fbb14066", 00:13:58.495 "is_configured": true, 00:13:58.495 "data_offset": 2048, 00:13:58.495 "data_size": 63488 00:13:58.495 }, 00:13:58.495 { 00:13:58.495 "name": "BaseBdev3", 00:13:58.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.495 "is_configured": false, 00:13:58.495 "data_offset": 0, 00:13:58.495 "data_size": 0 00:13:58.495 } 00:13:58.495 ] 00:13:58.495 }' 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.495 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.754 [2024-12-07 02:47:09.790550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.754 [2024-12-07 02:47:09.790807] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:13:58.754 [2024-12-07 02:47:09.790827] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:58.754 [2024-12-07 02:47:09.791155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:13:58.754 [2024-12-07 02:47:09.791654] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:13:58.754 [2024-12-07 02:47:09.791673] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:13:58.754 [2024-12-07 
02:47:09.791797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.754 BaseBdev3 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.754 [ 00:13:58.754 { 00:13:58.754 "name": "BaseBdev3", 00:13:58.754 "aliases": [ 00:13:58.754 "a1d6ee0e-08a1-4c95-b4ef-819be6f2d663" 00:13:58.754 ], 00:13:58.754 "product_name": "Malloc disk", 00:13:58.754 "block_size": 512, 00:13:58.754 
"num_blocks": 65536, 00:13:58.754 "uuid": "a1d6ee0e-08a1-4c95-b4ef-819be6f2d663", 00:13:58.754 "assigned_rate_limits": { 00:13:58.754 "rw_ios_per_sec": 0, 00:13:58.754 "rw_mbytes_per_sec": 0, 00:13:58.754 "r_mbytes_per_sec": 0, 00:13:58.754 "w_mbytes_per_sec": 0 00:13:58.754 }, 00:13:58.754 "claimed": true, 00:13:58.754 "claim_type": "exclusive_write", 00:13:58.754 "zoned": false, 00:13:58.754 "supported_io_types": { 00:13:58.754 "read": true, 00:13:58.754 "write": true, 00:13:58.754 "unmap": true, 00:13:58.754 "flush": true, 00:13:58.754 "reset": true, 00:13:58.754 "nvme_admin": false, 00:13:58.754 "nvme_io": false, 00:13:58.754 "nvme_io_md": false, 00:13:58.754 "write_zeroes": true, 00:13:58.754 "zcopy": true, 00:13:58.754 "get_zone_info": false, 00:13:58.754 "zone_management": false, 00:13:58.754 "zone_append": false, 00:13:58.754 "compare": false, 00:13:58.754 "compare_and_write": false, 00:13:58.754 "abort": true, 00:13:58.754 "seek_hole": false, 00:13:58.754 "seek_data": false, 00:13:58.754 "copy": true, 00:13:58.754 "nvme_iov_md": false 00:13:58.754 }, 00:13:58.754 "memory_domains": [ 00:13:58.754 { 00:13:58.754 "dma_device_id": "system", 00:13:58.754 "dma_device_type": 1 00:13:58.754 }, 00:13:58.754 { 00:13:58.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.754 "dma_device_type": 2 00:13:58.754 } 00:13:58.754 ], 00:13:58.754 "driver_specific": {} 00:13:58.754 } 00:13:58.754 ] 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.754 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.755 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.755 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.755 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.755 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.755 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.755 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.755 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.013 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.013 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.013 "name": "Existed_Raid", 00:13:59.013 "uuid": "14dea6c6-5840-4120-be91-7234f274b1eb", 00:13:59.013 "strip_size_kb": 64, 00:13:59.013 "state": "online", 00:13:59.013 "raid_level": "raid5f", 00:13:59.013 "superblock": true, 
00:13:59.013 "num_base_bdevs": 3, 00:13:59.013 "num_base_bdevs_discovered": 3, 00:13:59.013 "num_base_bdevs_operational": 3, 00:13:59.013 "base_bdevs_list": [ 00:13:59.013 { 00:13:59.013 "name": "BaseBdev1", 00:13:59.013 "uuid": "a77bee19-b2f4-4ba6-bb82-461a4faadeb0", 00:13:59.013 "is_configured": true, 00:13:59.013 "data_offset": 2048, 00:13:59.013 "data_size": 63488 00:13:59.013 }, 00:13:59.013 { 00:13:59.013 "name": "BaseBdev2", 00:13:59.013 "uuid": "70d80bb4-dbb3-4f46-a26e-c657fbb14066", 00:13:59.013 "is_configured": true, 00:13:59.013 "data_offset": 2048, 00:13:59.013 "data_size": 63488 00:13:59.013 }, 00:13:59.013 { 00:13:59.013 "name": "BaseBdev3", 00:13:59.013 "uuid": "a1d6ee0e-08a1-4c95-b4ef-819be6f2d663", 00:13:59.013 "is_configured": true, 00:13:59.013 "data_offset": 2048, 00:13:59.013 "data_size": 63488 00:13:59.013 } 00:13:59.013 ] 00:13:59.013 }' 00:13:59.013 02:47:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.013 02:47:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.271 [2024-12-07 02:47:10.241940] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.271 "name": "Existed_Raid", 00:13:59.271 "aliases": [ 00:13:59.271 "14dea6c6-5840-4120-be91-7234f274b1eb" 00:13:59.271 ], 00:13:59.271 "product_name": "Raid Volume", 00:13:59.271 "block_size": 512, 00:13:59.271 "num_blocks": 126976, 00:13:59.271 "uuid": "14dea6c6-5840-4120-be91-7234f274b1eb", 00:13:59.271 "assigned_rate_limits": { 00:13:59.271 "rw_ios_per_sec": 0, 00:13:59.271 "rw_mbytes_per_sec": 0, 00:13:59.271 "r_mbytes_per_sec": 0, 00:13:59.271 "w_mbytes_per_sec": 0 00:13:59.271 }, 00:13:59.271 "claimed": false, 00:13:59.271 "zoned": false, 00:13:59.271 "supported_io_types": { 00:13:59.271 "read": true, 00:13:59.271 "write": true, 00:13:59.271 "unmap": false, 00:13:59.271 "flush": false, 00:13:59.271 "reset": true, 00:13:59.271 "nvme_admin": false, 00:13:59.271 "nvme_io": false, 00:13:59.271 "nvme_io_md": false, 00:13:59.271 "write_zeroes": true, 00:13:59.271 "zcopy": false, 00:13:59.271 "get_zone_info": false, 00:13:59.271 "zone_management": false, 00:13:59.271 "zone_append": false, 00:13:59.271 "compare": false, 00:13:59.271 "compare_and_write": false, 00:13:59.271 "abort": false, 00:13:59.271 "seek_hole": false, 00:13:59.271 "seek_data": false, 00:13:59.271 "copy": false, 00:13:59.271 "nvme_iov_md": false 00:13:59.271 }, 00:13:59.271 "driver_specific": { 00:13:59.271 "raid": { 00:13:59.271 "uuid": "14dea6c6-5840-4120-be91-7234f274b1eb", 00:13:59.271 
"strip_size_kb": 64, 00:13:59.271 "state": "online", 00:13:59.271 "raid_level": "raid5f", 00:13:59.271 "superblock": true, 00:13:59.271 "num_base_bdevs": 3, 00:13:59.271 "num_base_bdevs_discovered": 3, 00:13:59.271 "num_base_bdevs_operational": 3, 00:13:59.271 "base_bdevs_list": [ 00:13:59.271 { 00:13:59.271 "name": "BaseBdev1", 00:13:59.271 "uuid": "a77bee19-b2f4-4ba6-bb82-461a4faadeb0", 00:13:59.271 "is_configured": true, 00:13:59.271 "data_offset": 2048, 00:13:59.271 "data_size": 63488 00:13:59.271 }, 00:13:59.271 { 00:13:59.271 "name": "BaseBdev2", 00:13:59.271 "uuid": "70d80bb4-dbb3-4f46-a26e-c657fbb14066", 00:13:59.271 "is_configured": true, 00:13:59.271 "data_offset": 2048, 00:13:59.271 "data_size": 63488 00:13:59.271 }, 00:13:59.271 { 00:13:59.271 "name": "BaseBdev3", 00:13:59.271 "uuid": "a1d6ee0e-08a1-4c95-b4ef-819be6f2d663", 00:13:59.271 "is_configured": true, 00:13:59.271 "data_offset": 2048, 00:13:59.271 "data_size": 63488 00:13:59.271 } 00:13:59.271 ] 00:13:59.271 } 00:13:59.271 } 00:13:59.271 }' 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:59.271 BaseBdev2 00:13:59.271 BaseBdev3' 00:13:59.271 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.530 02:47:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.530 [2024-12-07 02:47:10.513385] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:59.530 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.531 
02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.531 "name": "Existed_Raid", 00:13:59.531 "uuid": "14dea6c6-5840-4120-be91-7234f274b1eb", 00:13:59.531 "strip_size_kb": 64, 00:13:59.531 "state": "online", 00:13:59.531 "raid_level": "raid5f", 00:13:59.531 "superblock": true, 00:13:59.531 "num_base_bdevs": 3, 00:13:59.531 "num_base_bdevs_discovered": 2, 00:13:59.531 "num_base_bdevs_operational": 2, 00:13:59.531 
"base_bdevs_list": [ 00:13:59.531 { 00:13:59.531 "name": null, 00:13:59.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.531 "is_configured": false, 00:13:59.531 "data_offset": 0, 00:13:59.531 "data_size": 63488 00:13:59.531 }, 00:13:59.531 { 00:13:59.531 "name": "BaseBdev2", 00:13:59.531 "uuid": "70d80bb4-dbb3-4f46-a26e-c657fbb14066", 00:13:59.531 "is_configured": true, 00:13:59.531 "data_offset": 2048, 00:13:59.531 "data_size": 63488 00:13:59.531 }, 00:13:59.531 { 00:13:59.531 "name": "BaseBdev3", 00:13:59.531 "uuid": "a1d6ee0e-08a1-4c95-b4ef-819be6f2d663", 00:13:59.531 "is_configured": true, 00:13:59.531 "data_offset": 2048, 00:13:59.531 "data_size": 63488 00:13:59.531 } 00:13:59.531 ] 00:13:59.531 }' 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.531 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.099 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:00.099 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:00.099 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.099 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.099 02:47:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:00.099 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.099 02:47:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.099 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:00.099 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:00.100 02:47:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.100 [2024-12-07 02:47:11.029167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.100 [2024-12-07 02:47:11.029371] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.100 [2024-12-07 02:47:11.049756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:00.100 02:47:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.100 [2024-12-07 02:47:11.109693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:00.100 [2024-12-07 02:47:11.109789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.100 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.359 BaseBdev2 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.359 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.359 [ 00:14:00.359 { 00:14:00.360 "name": "BaseBdev2", 
00:14:00.360 "aliases": [ 00:14:00.360 "ef5ca55d-d918-4f38-a28c-84be0aeb2c53" 00:14:00.360 ], 00:14:00.360 "product_name": "Malloc disk", 00:14:00.360 "block_size": 512, 00:14:00.360 "num_blocks": 65536, 00:14:00.360 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:00.360 "assigned_rate_limits": { 00:14:00.360 "rw_ios_per_sec": 0, 00:14:00.360 "rw_mbytes_per_sec": 0, 00:14:00.360 "r_mbytes_per_sec": 0, 00:14:00.360 "w_mbytes_per_sec": 0 00:14:00.360 }, 00:14:00.360 "claimed": false, 00:14:00.360 "zoned": false, 00:14:00.360 "supported_io_types": { 00:14:00.360 "read": true, 00:14:00.360 "write": true, 00:14:00.360 "unmap": true, 00:14:00.360 "flush": true, 00:14:00.360 "reset": true, 00:14:00.360 "nvme_admin": false, 00:14:00.360 "nvme_io": false, 00:14:00.360 "nvme_io_md": false, 00:14:00.360 "write_zeroes": true, 00:14:00.360 "zcopy": true, 00:14:00.360 "get_zone_info": false, 00:14:00.360 "zone_management": false, 00:14:00.360 "zone_append": false, 00:14:00.360 "compare": false, 00:14:00.360 "compare_and_write": false, 00:14:00.360 "abort": true, 00:14:00.360 "seek_hole": false, 00:14:00.360 "seek_data": false, 00:14:00.360 "copy": true, 00:14:00.360 "nvme_iov_md": false 00:14:00.360 }, 00:14:00.360 "memory_domains": [ 00:14:00.360 { 00:14:00.360 "dma_device_id": "system", 00:14:00.360 "dma_device_type": 1 00:14:00.360 }, 00:14:00.360 { 00:14:00.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.360 "dma_device_type": 2 00:14:00.360 } 00:14:00.360 ], 00:14:00.360 "driver_specific": {} 00:14:00.360 } 00:14:00.360 ] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 
00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 BaseBdev3 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.360 [ 00:14:00.360 { 00:14:00.360 "name": "BaseBdev3", 00:14:00.360 "aliases": [ 00:14:00.360 "5be3295b-2bb2-488c-9f65-427b0fc24037" 00:14:00.360 ], 00:14:00.360 "product_name": "Malloc disk", 00:14:00.360 "block_size": 512, 00:14:00.360 "num_blocks": 65536, 00:14:00.360 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:00.360 "assigned_rate_limits": { 00:14:00.360 "rw_ios_per_sec": 0, 00:14:00.360 "rw_mbytes_per_sec": 0, 00:14:00.360 "r_mbytes_per_sec": 0, 00:14:00.360 "w_mbytes_per_sec": 0 00:14:00.360 }, 00:14:00.360 "claimed": false, 00:14:00.360 "zoned": false, 00:14:00.360 "supported_io_types": { 00:14:00.360 "read": true, 00:14:00.360 "write": true, 00:14:00.360 "unmap": true, 00:14:00.360 "flush": true, 00:14:00.360 "reset": true, 00:14:00.360 "nvme_admin": false, 00:14:00.360 "nvme_io": false, 00:14:00.360 "nvme_io_md": false, 00:14:00.360 "write_zeroes": true, 00:14:00.360 "zcopy": true, 00:14:00.360 "get_zone_info": false, 00:14:00.360 "zone_management": false, 00:14:00.360 "zone_append": false, 00:14:00.360 "compare": false, 00:14:00.360 "compare_and_write": false, 00:14:00.360 "abort": true, 00:14:00.360 "seek_hole": false, 00:14:00.360 "seek_data": false, 00:14:00.360 "copy": true, 00:14:00.360 "nvme_iov_md": false 00:14:00.360 }, 00:14:00.360 "memory_domains": [ 00:14:00.360 { 00:14:00.360 "dma_device_id": "system", 00:14:00.360 "dma_device_type": 1 00:14:00.360 }, 00:14:00.360 { 00:14:00.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.360 "dma_device_type": 2 00:14:00.360 } 00:14:00.360 ], 00:14:00.360 "driver_specific": {} 00:14:00.360 } 00:14:00.360 ] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:00.360 
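With the malloc base bdevs in place (BaseBdev1 is deliberately left uncreated at this point, which is why the array stays in the "configuring" state below), the harness assembles the raid5f array in superblock mode and filters its state with jq. A hedged sketch of the equivalent RPCs, again assuming `scripts/rpc.py` and a running target:

```shell
# Sketch only: requires a running SPDK target (default RPC socket assumed).
# -z 64: 64 KiB strip size; -s: store a superblock; -r raid5f: RAID level.
./scripts/rpc.py bdev_raid_create -z 64 -s -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Inspect the array; the test narrows the listing to one entry with jq,
# then checks fields such as "state" and "num_base_bdevs_discovered".
./scripts/rpc.py bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'
```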
02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 [2024-12-07 02:47:11.305070] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.360 [2024-12-07 02:47:11.305157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.360 [2024-12-07 02:47:11.305202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.360 [2024-12-07 02:47:11.307238] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.360 "name": "Existed_Raid", 00:14:00.360 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:00.360 "strip_size_kb": 64, 00:14:00.360 "state": "configuring", 00:14:00.360 "raid_level": "raid5f", 00:14:00.360 "superblock": true, 00:14:00.360 "num_base_bdevs": 3, 00:14:00.360 "num_base_bdevs_discovered": 2, 00:14:00.360 "num_base_bdevs_operational": 3, 00:14:00.360 "base_bdevs_list": [ 00:14:00.360 { 00:14:00.360 "name": "BaseBdev1", 00:14:00.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.360 "is_configured": false, 00:14:00.360 "data_offset": 0, 00:14:00.360 "data_size": 0 00:14:00.360 }, 00:14:00.360 { 00:14:00.360 "name": "BaseBdev2", 00:14:00.360 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:00.360 "is_configured": true, 00:14:00.360 "data_offset": 2048, 00:14:00.360 "data_size": 63488 00:14:00.360 }, 00:14:00.360 { 00:14:00.360 "name": "BaseBdev3", 00:14:00.360 "uuid": 
"5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:00.360 "is_configured": true, 00:14:00.360 "data_offset": 2048, 00:14:00.360 "data_size": 63488 00:14:00.360 } 00:14:00.360 ] 00:14:00.360 }' 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.360 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.929 [2024-12-07 02:47:11.736245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.929 02:47:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.929 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.929 "name": "Existed_Raid", 00:14:00.929 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:00.929 "strip_size_kb": 64, 00:14:00.929 "state": "configuring", 00:14:00.929 "raid_level": "raid5f", 00:14:00.929 "superblock": true, 00:14:00.929 "num_base_bdevs": 3, 00:14:00.929 "num_base_bdevs_discovered": 1, 00:14:00.929 "num_base_bdevs_operational": 3, 00:14:00.929 "base_bdevs_list": [ 00:14:00.929 { 00:14:00.929 "name": "BaseBdev1", 00:14:00.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.929 "is_configured": false, 00:14:00.929 "data_offset": 0, 00:14:00.929 "data_size": 0 00:14:00.929 }, 00:14:00.929 { 00:14:00.929 "name": null, 00:14:00.930 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:00.930 "is_configured": false, 00:14:00.930 "data_offset": 0, 00:14:00.930 "data_size": 63488 00:14:00.930 }, 00:14:00.930 { 00:14:00.930 "name": "BaseBdev3", 00:14:00.930 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:00.930 "is_configured": true, 00:14:00.930 "data_offset": 2048, 00:14:00.930 "data_size": 63488 00:14:00.930 } 00:14:00.930 ] 
00:14:00.930 }' 00:14:00.930 02:47:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.930 02:47:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.189 [2024-12-07 02:47:12.156080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.189 BaseBdev1 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.189 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.189 [ 00:14:01.189 { 00:14:01.189 "name": "BaseBdev1", 00:14:01.189 "aliases": [ 00:14:01.189 "de1050e5-3f28-4485-bdf7-bad2b4e3b61d" 00:14:01.189 ], 00:14:01.189 "product_name": "Malloc disk", 00:14:01.189 "block_size": 512, 00:14:01.189 "num_blocks": 65536, 00:14:01.189 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:01.189 "assigned_rate_limits": { 00:14:01.189 "rw_ios_per_sec": 0, 00:14:01.189 "rw_mbytes_per_sec": 0, 00:14:01.189 "r_mbytes_per_sec": 0, 00:14:01.189 "w_mbytes_per_sec": 0 00:14:01.189 }, 00:14:01.189 "claimed": true, 00:14:01.189 "claim_type": "exclusive_write", 00:14:01.189 "zoned": false, 00:14:01.189 "supported_io_types": { 00:14:01.189 "read": true, 00:14:01.189 "write": true, 00:14:01.189 "unmap": true, 00:14:01.189 "flush": true, 00:14:01.189 "reset": true, 00:14:01.189 "nvme_admin": false, 00:14:01.189 "nvme_io": false, 00:14:01.189 
"nvme_io_md": false, 00:14:01.189 "write_zeroes": true, 00:14:01.189 "zcopy": true, 00:14:01.189 "get_zone_info": false, 00:14:01.189 "zone_management": false, 00:14:01.189 "zone_append": false, 00:14:01.189 "compare": false, 00:14:01.189 "compare_and_write": false, 00:14:01.189 "abort": true, 00:14:01.189 "seek_hole": false, 00:14:01.189 "seek_data": false, 00:14:01.189 "copy": true, 00:14:01.189 "nvme_iov_md": false 00:14:01.189 }, 00:14:01.189 "memory_domains": [ 00:14:01.190 { 00:14:01.190 "dma_device_id": "system", 00:14:01.190 "dma_device_type": 1 00:14:01.190 }, 00:14:01.190 { 00:14:01.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.190 "dma_device_type": 2 00:14:01.190 } 00:14:01.190 ], 00:14:01.190 "driver_specific": {} 00:14:01.190 } 00:14:01.190 ] 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.190 
02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.190 "name": "Existed_Raid", 00:14:01.190 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:01.190 "strip_size_kb": 64, 00:14:01.190 "state": "configuring", 00:14:01.190 "raid_level": "raid5f", 00:14:01.190 "superblock": true, 00:14:01.190 "num_base_bdevs": 3, 00:14:01.190 "num_base_bdevs_discovered": 2, 00:14:01.190 "num_base_bdevs_operational": 3, 00:14:01.190 "base_bdevs_list": [ 00:14:01.190 { 00:14:01.190 "name": "BaseBdev1", 00:14:01.190 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:01.190 "is_configured": true, 00:14:01.190 "data_offset": 2048, 00:14:01.190 "data_size": 63488 00:14:01.190 }, 00:14:01.190 { 00:14:01.190 "name": null, 00:14:01.190 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:01.190 "is_configured": false, 00:14:01.190 "data_offset": 0, 00:14:01.190 "data_size": 63488 00:14:01.190 }, 00:14:01.190 { 00:14:01.190 "name": "BaseBdev3", 00:14:01.190 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:01.190 "is_configured": true, 00:14:01.190 "data_offset": 2048, 00:14:01.190 "data_size": 63488 00:14:01.190 } 
00:14:01.190 ] 00:14:01.190 }' 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.190 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.764 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.764 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.764 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.764 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.765 [2024-12-07 02:47:12.663355] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.765 "name": "Existed_Raid", 00:14:01.765 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:01.765 "strip_size_kb": 64, 00:14:01.765 "state": "configuring", 00:14:01.765 "raid_level": "raid5f", 00:14:01.765 "superblock": true, 00:14:01.765 "num_base_bdevs": 3, 00:14:01.765 "num_base_bdevs_discovered": 1, 00:14:01.765 "num_base_bdevs_operational": 3, 00:14:01.765 "base_bdevs_list": [ 00:14:01.765 { 00:14:01.765 "name": "BaseBdev1", 00:14:01.765 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:01.765 "is_configured": true, 
00:14:01.765 "data_offset": 2048, 00:14:01.765 "data_size": 63488 00:14:01.765 }, 00:14:01.765 { 00:14:01.765 "name": null, 00:14:01.765 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:01.765 "is_configured": false, 00:14:01.765 "data_offset": 0, 00:14:01.765 "data_size": 63488 00:14:01.765 }, 00:14:01.765 { 00:14:01.765 "name": null, 00:14:01.765 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:01.765 "is_configured": false, 00:14:01.765 "data_offset": 0, 00:14:01.765 "data_size": 63488 00:14:01.765 } 00:14:01.765 ] 00:14:01.765 }' 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.765 02:47:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.330 [2024-12-07 02:47:13.162528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.330 02:47:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:02.330 "name": "Existed_Raid", 00:14:02.330 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:02.330 "strip_size_kb": 64, 00:14:02.330 "state": "configuring", 00:14:02.330 "raid_level": "raid5f", 00:14:02.330 "superblock": true, 00:14:02.330 "num_base_bdevs": 3, 00:14:02.330 "num_base_bdevs_discovered": 2, 00:14:02.330 "num_base_bdevs_operational": 3, 00:14:02.330 "base_bdevs_list": [ 00:14:02.330 { 00:14:02.330 "name": "BaseBdev1", 00:14:02.330 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:02.330 "is_configured": true, 00:14:02.330 "data_offset": 2048, 00:14:02.330 "data_size": 63488 00:14:02.330 }, 00:14:02.330 { 00:14:02.330 "name": null, 00:14:02.330 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:02.330 "is_configured": false, 00:14:02.330 "data_offset": 0, 00:14:02.330 "data_size": 63488 00:14:02.330 }, 00:14:02.330 { 00:14:02.330 "name": "BaseBdev3", 00:14:02.330 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:02.330 "is_configured": true, 00:14:02.330 "data_offset": 2048, 00:14:02.330 "data_size": 63488 00:14:02.330 } 00:14:02.330 ] 00:14:02.330 }' 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.330 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.588 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.588 [2024-12-07 02:47:13.653683] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.846 02:47:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.846 "name": "Existed_Raid", 00:14:02.846 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:02.846 "strip_size_kb": 64, 00:14:02.846 "state": "configuring", 00:14:02.846 "raid_level": "raid5f", 00:14:02.846 "superblock": true, 00:14:02.846 "num_base_bdevs": 3, 00:14:02.846 "num_base_bdevs_discovered": 1, 00:14:02.846 "num_base_bdevs_operational": 3, 00:14:02.846 "base_bdevs_list": [ 00:14:02.846 { 00:14:02.846 "name": null, 00:14:02.846 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:02.846 "is_configured": false, 00:14:02.846 "data_offset": 0, 00:14:02.846 "data_size": 63488 00:14:02.846 }, 00:14:02.846 { 00:14:02.846 "name": null, 00:14:02.846 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:02.846 "is_configured": false, 00:14:02.846 "data_offset": 0, 00:14:02.846 "data_size": 63488 00:14:02.846 }, 00:14:02.846 { 00:14:02.846 "name": "BaseBdev3", 00:14:02.846 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:02.846 "is_configured": true, 00:14:02.846 "data_offset": 2048, 00:14:02.846 "data_size": 63488 00:14:02.846 } 00:14:02.846 ] 00:14:02.846 }' 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.846 02:47:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.105 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.363 [2024-12-07 02:47:14.184784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.363 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.363 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:03.363 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.363 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.363 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.363 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.364 
02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.364 "name": "Existed_Raid", 00:14:03.364 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:03.364 "strip_size_kb": 64, 00:14:03.364 "state": "configuring", 00:14:03.364 "raid_level": "raid5f", 00:14:03.364 "superblock": true, 00:14:03.364 "num_base_bdevs": 3, 00:14:03.364 "num_base_bdevs_discovered": 2, 00:14:03.364 "num_base_bdevs_operational": 3, 00:14:03.364 "base_bdevs_list": [ 00:14:03.364 { 00:14:03.364 "name": null, 00:14:03.364 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:03.364 "is_configured": false, 00:14:03.364 "data_offset": 0, 00:14:03.364 "data_size": 63488 00:14:03.364 }, 00:14:03.364 { 00:14:03.364 "name": "BaseBdev2", 00:14:03.364 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:03.364 "is_configured": true, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 }, 
00:14:03.364 { 00:14:03.364 "name": "BaseBdev3", 00:14:03.364 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:03.364 "is_configured": true, 00:14:03.364 "data_offset": 2048, 00:14:03.364 "data_size": 63488 00:14:03.364 } 00:14:03.364 ] 00:14:03.364 }' 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.364 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.623 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.623 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:03.623 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.623 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u de1050e5-3f28-4485-bdf7-bad2b4e3b61d 00:14:03.882 02:47:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.882 [2024-12-07 02:47:14.803185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:03.882 [2024-12-07 02:47:14.803434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:03.882 [2024-12-07 02:47:14.803456] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:03.882 [2024-12-07 02:47:14.803751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:03.882 NewBaseBdev 00:14:03.882 [2024-12-07 02:47:14.804214] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:03.882 [2024-12-07 02:47:14.804233] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:03.882 [2024-12-07 02:47:14.804351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:03.882 02:47:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.882 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.882 [ 00:14:03.882 { 00:14:03.882 "name": "NewBaseBdev", 00:14:03.882 "aliases": [ 00:14:03.882 "de1050e5-3f28-4485-bdf7-bad2b4e3b61d" 00:14:03.882 ], 00:14:03.883 "product_name": "Malloc disk", 00:14:03.883 "block_size": 512, 00:14:03.883 "num_blocks": 65536, 00:14:03.883 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:03.883 "assigned_rate_limits": { 00:14:03.883 "rw_ios_per_sec": 0, 00:14:03.883 "rw_mbytes_per_sec": 0, 00:14:03.883 "r_mbytes_per_sec": 0, 00:14:03.883 "w_mbytes_per_sec": 0 00:14:03.883 }, 00:14:03.883 "claimed": true, 00:14:03.883 "claim_type": "exclusive_write", 00:14:03.883 "zoned": false, 00:14:03.883 "supported_io_types": { 00:14:03.883 "read": true, 00:14:03.883 "write": true, 00:14:03.883 "unmap": true, 00:14:03.883 "flush": true, 00:14:03.883 "reset": true, 00:14:03.883 "nvme_admin": false, 00:14:03.883 "nvme_io": false, 00:14:03.883 "nvme_io_md": false, 00:14:03.883 "write_zeroes": true, 00:14:03.883 "zcopy": true, 00:14:03.883 "get_zone_info": false, 00:14:03.883 "zone_management": false, 00:14:03.883 "zone_append": false, 00:14:03.883 "compare": false, 00:14:03.883 "compare_and_write": false, 00:14:03.883 "abort": true, 
00:14:03.883 "seek_hole": false, 00:14:03.883 "seek_data": false, 00:14:03.883 "copy": true, 00:14:03.883 "nvme_iov_md": false 00:14:03.883 }, 00:14:03.883 "memory_domains": [ 00:14:03.883 { 00:14:03.883 "dma_device_id": "system", 00:14:03.883 "dma_device_type": 1 00:14:03.883 }, 00:14:03.883 { 00:14:03.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.883 "dma_device_type": 2 00:14:03.883 } 00:14:03.883 ], 00:14:03.883 "driver_specific": {} 00:14:03.883 } 00:14:03.883 ] 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.883 "name": "Existed_Raid", 00:14:03.883 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:03.883 "strip_size_kb": 64, 00:14:03.883 "state": "online", 00:14:03.883 "raid_level": "raid5f", 00:14:03.883 "superblock": true, 00:14:03.883 "num_base_bdevs": 3, 00:14:03.883 "num_base_bdevs_discovered": 3, 00:14:03.883 "num_base_bdevs_operational": 3, 00:14:03.883 "base_bdevs_list": [ 00:14:03.883 { 00:14:03.883 "name": "NewBaseBdev", 00:14:03.883 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:03.883 "is_configured": true, 00:14:03.883 "data_offset": 2048, 00:14:03.883 "data_size": 63488 00:14:03.883 }, 00:14:03.883 { 00:14:03.883 "name": "BaseBdev2", 00:14:03.883 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:03.883 "is_configured": true, 00:14:03.883 "data_offset": 2048, 00:14:03.883 "data_size": 63488 00:14:03.883 }, 00:14:03.883 { 00:14:03.883 "name": "BaseBdev3", 00:14:03.883 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:03.883 "is_configured": true, 00:14:03.883 "data_offset": 2048, 00:14:03.883 "data_size": 63488 00:14:03.883 } 00:14:03.883 ] 00:14:03.883 }' 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.883 02:47:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.462 [2024-12-07 02:47:15.310464] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.462 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:04.462 "name": "Existed_Raid", 00:14:04.462 "aliases": [ 00:14:04.462 "032a1542-33eb-448a-8b89-d8f772237a8e" 00:14:04.462 ], 00:14:04.462 "product_name": "Raid Volume", 00:14:04.462 "block_size": 512, 00:14:04.462 "num_blocks": 126976, 00:14:04.462 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:04.462 "assigned_rate_limits": { 00:14:04.462 "rw_ios_per_sec": 0, 00:14:04.462 "rw_mbytes_per_sec": 0, 00:14:04.462 "r_mbytes_per_sec": 0, 00:14:04.462 "w_mbytes_per_sec": 0 00:14:04.462 }, 00:14:04.462 "claimed": false, 00:14:04.462 
"zoned": false, 00:14:04.462 "supported_io_types": { 00:14:04.462 "read": true, 00:14:04.462 "write": true, 00:14:04.462 "unmap": false, 00:14:04.462 "flush": false, 00:14:04.462 "reset": true, 00:14:04.462 "nvme_admin": false, 00:14:04.462 "nvme_io": false, 00:14:04.462 "nvme_io_md": false, 00:14:04.462 "write_zeroes": true, 00:14:04.462 "zcopy": false, 00:14:04.462 "get_zone_info": false, 00:14:04.462 "zone_management": false, 00:14:04.462 "zone_append": false, 00:14:04.462 "compare": false, 00:14:04.462 "compare_and_write": false, 00:14:04.462 "abort": false, 00:14:04.462 "seek_hole": false, 00:14:04.462 "seek_data": false, 00:14:04.462 "copy": false, 00:14:04.462 "nvme_iov_md": false 00:14:04.462 }, 00:14:04.462 "driver_specific": { 00:14:04.462 "raid": { 00:14:04.462 "uuid": "032a1542-33eb-448a-8b89-d8f772237a8e", 00:14:04.462 "strip_size_kb": 64, 00:14:04.462 "state": "online", 00:14:04.462 "raid_level": "raid5f", 00:14:04.462 "superblock": true, 00:14:04.462 "num_base_bdevs": 3, 00:14:04.462 "num_base_bdevs_discovered": 3, 00:14:04.462 "num_base_bdevs_operational": 3, 00:14:04.462 "base_bdevs_list": [ 00:14:04.462 { 00:14:04.462 "name": "NewBaseBdev", 00:14:04.462 "uuid": "de1050e5-3f28-4485-bdf7-bad2b4e3b61d", 00:14:04.462 "is_configured": true, 00:14:04.462 "data_offset": 2048, 00:14:04.462 "data_size": 63488 00:14:04.462 }, 00:14:04.462 { 00:14:04.462 "name": "BaseBdev2", 00:14:04.462 "uuid": "ef5ca55d-d918-4f38-a28c-84be0aeb2c53", 00:14:04.462 "is_configured": true, 00:14:04.462 "data_offset": 2048, 00:14:04.462 "data_size": 63488 00:14:04.462 }, 00:14:04.462 { 00:14:04.462 "name": "BaseBdev3", 00:14:04.462 "uuid": "5be3295b-2bb2-488c-9f65-427b0fc24037", 00:14:04.463 "is_configured": true, 00:14:04.463 "data_offset": 2048, 00:14:04.463 "data_size": 63488 00:14:04.463 } 00:14:04.463 ] 00:14:04.463 } 00:14:04.463 } 00:14:04.463 }' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:04.463 BaseBdev2 00:14:04.463 BaseBdev3' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.463 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.722 [2024-12-07 02:47:15.581853] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.722 [2024-12-07 02:47:15.581876] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.722 [2024-12-07 02:47:15.581951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.722 [2024-12-07 02:47:15.582210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.722 [2024-12-07 02:47:15.582224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 91320 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 91320 ']' 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 91320 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91320 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.722 killing process with pid 91320 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91320' 00:14:04.722 02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 91320 00:14:04.722 [2024-12-07 02:47:15.629451] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.722 
02:47:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 91320 00:14:04.722 [2024-12-07 02:47:15.686590] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.292 02:47:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:05.292 00:14:05.292 real 0m9.199s 00:14:05.292 user 0m15.369s 00:14:05.292 sys 0m2.007s 00:14:05.292 02:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.292 ************************************ 00:14:05.292 END TEST raid5f_state_function_test_sb 00:14:05.292 ************************************ 00:14:05.292 02:47:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.292 02:47:16 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:05.292 02:47:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:05.292 02:47:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.292 02:47:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.292 ************************************ 00:14:05.292 START TEST raid5f_superblock_test 00:14:05.292 ************************************ 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91924 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91924 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91924 ']' 00:14:05.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:05.292 02:47:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.292 [2024-12-07 02:47:16.245934] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:05.292 [2024-12-07 02:47:16.246577] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91924 ] 00:14:05.552 [2024-12-07 02:47:16.407270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.552 [2024-12-07 02:47:16.479910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.552 [2024-12-07 02:47:16.556347] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.552 [2024-12-07 02:47:16.556388] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.165 malloc1 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.165 [2024-12-07 02:47:17.121927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:06.165 [2024-12-07 02:47:17.122044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.165 [2024-12-07 02:47:17.122086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.165 [2024-12-07 02:47:17.122139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.165 [2024-12-07 
02:47:17.124559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.165 [2024-12-07 02:47:17.124646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:06.165 pt1 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.165 malloc2 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.165 [2024-12-07 02:47:17.176688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.165 [2024-12-07 02:47:17.176809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.165 [2024-12-07 02:47:17.176853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.165 [2024-12-07 02:47:17.176883] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.165 [2024-12-07 02:47:17.181790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.165 [2024-12-07 02:47:17.181924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.165 pt2 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 
00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.165 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.165 malloc3 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.166 [2024-12-07 02:47:17.213877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:06.166 [2024-12-07 02:47:17.213961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.166 [2024-12-07 02:47:17.213996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:06.166 [2024-12-07 02:47:17.214023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.166 [2024-12-07 02:47:17.216386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.166 [2024-12-07 02:47:17.216453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:06.166 pt3 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.166 [2024-12-07 02:47:17.225916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:06.166 [2024-12-07 02:47:17.228003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.166 [2024-12-07 02:47:17.228102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:06.166 [2024-12-07 02:47:17.228294] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:06.166 [2024-12-07 02:47:17.228344] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:06.166 [2024-12-07 02:47:17.228629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:14:06.166 [2024-12-07 02:47:17.229111] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:06.166 [2024-12-07 02:47:17.229162] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:06.166 [2024-12-07 02:47:17.229344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.166 02:47:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.166 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.430 "name": "raid_bdev1", 00:14:06.430 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:06.430 "strip_size_kb": 64, 00:14:06.430 "state": "online", 00:14:06.430 "raid_level": "raid5f", 00:14:06.430 "superblock": true, 00:14:06.430 "num_base_bdevs": 3, 00:14:06.430 "num_base_bdevs_discovered": 3, 00:14:06.430 "num_base_bdevs_operational": 3, 00:14:06.430 "base_bdevs_list": [ 00:14:06.430 { 00:14:06.430 "name": "pt1", 00:14:06.430 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.430 "is_configured": true, 00:14:06.430 "data_offset": 2048, 00:14:06.430 "data_size": 63488 00:14:06.430 }, 00:14:06.430 { 00:14:06.430 "name": "pt2", 00:14:06.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.430 "is_configured": true, 00:14:06.430 "data_offset": 2048, 00:14:06.430 "data_size": 
63488 00:14:06.430 }, 00:14:06.430 { 00:14:06.430 "name": "pt3", 00:14:06.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.430 "is_configured": true, 00:14:06.430 "data_offset": 2048, 00:14:06.430 "data_size": 63488 00:14:06.430 } 00:14:06.430 ] 00:14:06.430 }' 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.430 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.689 [2024-12-07 02:47:17.690944] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.689 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.689 "name": "raid_bdev1", 00:14:06.689 "aliases": [ 00:14:06.689 
"f7e07858-7cdc-46a8-81f3-039cb3e34579" 00:14:06.689 ], 00:14:06.689 "product_name": "Raid Volume", 00:14:06.689 "block_size": 512, 00:14:06.689 "num_blocks": 126976, 00:14:06.689 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:06.689 "assigned_rate_limits": { 00:14:06.689 "rw_ios_per_sec": 0, 00:14:06.689 "rw_mbytes_per_sec": 0, 00:14:06.689 "r_mbytes_per_sec": 0, 00:14:06.689 "w_mbytes_per_sec": 0 00:14:06.689 }, 00:14:06.689 "claimed": false, 00:14:06.689 "zoned": false, 00:14:06.689 "supported_io_types": { 00:14:06.689 "read": true, 00:14:06.689 "write": true, 00:14:06.689 "unmap": false, 00:14:06.689 "flush": false, 00:14:06.690 "reset": true, 00:14:06.690 "nvme_admin": false, 00:14:06.690 "nvme_io": false, 00:14:06.690 "nvme_io_md": false, 00:14:06.690 "write_zeroes": true, 00:14:06.690 "zcopy": false, 00:14:06.690 "get_zone_info": false, 00:14:06.690 "zone_management": false, 00:14:06.690 "zone_append": false, 00:14:06.690 "compare": false, 00:14:06.690 "compare_and_write": false, 00:14:06.690 "abort": false, 00:14:06.690 "seek_hole": false, 00:14:06.690 "seek_data": false, 00:14:06.690 "copy": false, 00:14:06.690 "nvme_iov_md": false 00:14:06.690 }, 00:14:06.690 "driver_specific": { 00:14:06.690 "raid": { 00:14:06.690 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:06.690 "strip_size_kb": 64, 00:14:06.690 "state": "online", 00:14:06.690 "raid_level": "raid5f", 00:14:06.690 "superblock": true, 00:14:06.690 "num_base_bdevs": 3, 00:14:06.690 "num_base_bdevs_discovered": 3, 00:14:06.690 "num_base_bdevs_operational": 3, 00:14:06.690 "base_bdevs_list": [ 00:14:06.690 { 00:14:06.690 "name": "pt1", 00:14:06.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.690 "is_configured": true, 00:14:06.690 "data_offset": 2048, 00:14:06.690 "data_size": 63488 00:14:06.690 }, 00:14:06.690 { 00:14:06.690 "name": "pt2", 00:14:06.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.690 "is_configured": true, 00:14:06.690 "data_offset": 2048, 
00:14:06.690 "data_size": 63488 00:14:06.690 }, 00:14:06.690 { 00:14:06.690 "name": "pt3", 00:14:06.690 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.690 "is_configured": true, 00:14:06.690 "data_offset": 2048, 00:14:06.690 "data_size": 63488 00:14:06.690 } 00:14:06.690 ] 00:14:06.690 } 00:14:06.690 } 00:14:06.690 }' 00:14:06.690 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:06.949 pt2 00:14:06.949 pt3' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:06.949 02:47:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:06.949 [2024-12-07 02:47:17.994390] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.949 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f7e07858-7cdc-46a8-81f3-039cb3e34579 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f7e07858-7cdc-46a8-81f3-039cb3e34579 ']' 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.209 [2024-12-07 02:47:18.046135] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.209 [2024-12-07 02:47:18.046192] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.209 [2024-12-07 02:47:18.046310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.209 [2024-12-07 02:47:18.046391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.209 [2024-12-07 02:47:18.046439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.209 
02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:07.209 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.209 [2024-12-07 02:47:18.197899] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:07.209 [2024-12-07 02:47:18.199920] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:07.209 [2024-12-07 02:47:18.200011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:07.209 [2024-12-07 02:47:18.200065] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:07.209 [2024-12-07 02:47:18.200102] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:07.209 [2024-12-07 02:47:18.200120] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:07.209 [2024-12-07 02:47:18.200133] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.209 [2024-12-07 02:47:18.200144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:14:07.209 request: 00:14:07.209 { 00:14:07.209 "name": "raid_bdev1", 00:14:07.210 "raid_level": "raid5f", 00:14:07.210 "base_bdevs": [ 00:14:07.210 "malloc1", 00:14:07.210 "malloc2", 00:14:07.210 "malloc3" 00:14:07.210 ], 00:14:07.210 "strip_size_kb": 64, 00:14:07.210 "superblock": false, 00:14:07.210 "method": "bdev_raid_create", 00:14:07.210 "req_id": 1 00:14:07.210 } 00:14:07.210 Got JSON-RPC error response 00:14:07.210 response: 00:14:07.210 { 00:14:07.210 "code": -17, 00:14:07.210 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:07.210 } 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@653 -- # es=1 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.210 [2024-12-07 02:47:18.265747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:07.210 [2024-12-07 02:47:18.265842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.210 [2024-12-07 02:47:18.265872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:07.210 [2024-12-07 02:47:18.265899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.210 [2024-12-07 02:47:18.268166] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.210 [2024-12-07 02:47:18.268250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:07.210 [2024-12-07 02:47:18.268328] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:07.210 [2024-12-07 02:47:18.268381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:07.210 pt1 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.210 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.469 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.469 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.469 "name": "raid_bdev1", 00:14:07.469 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:07.469 "strip_size_kb": 64, 00:14:07.469 "state": "configuring", 00:14:07.469 "raid_level": "raid5f", 00:14:07.469 "superblock": true, 00:14:07.469 "num_base_bdevs": 3, 00:14:07.469 "num_base_bdevs_discovered": 1, 00:14:07.469 "num_base_bdevs_operational": 3, 00:14:07.469 "base_bdevs_list": [ 00:14:07.469 { 00:14:07.469 "name": "pt1", 00:14:07.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:07.469 "is_configured": true, 00:14:07.469 "data_offset": 2048, 00:14:07.469 "data_size": 63488 00:14:07.469 }, 00:14:07.469 { 00:14:07.469 "name": null, 00:14:07.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.469 "is_configured": false, 00:14:07.469 "data_offset": 2048, 00:14:07.469 "data_size": 63488 00:14:07.469 }, 00:14:07.469 { 00:14:07.469 "name": null, 00:14:07.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.470 "is_configured": false, 00:14:07.470 "data_offset": 2048, 00:14:07.470 "data_size": 63488 00:14:07.470 } 00:14:07.470 ] 00:14:07.470 }' 00:14:07.470 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.470 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.729 [2024-12-07 02:47:18.728958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:07.729 [2024-12-07 02:47:18.729043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.729 [2024-12-07 02:47:18.729095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:07.729 [2024-12-07 02:47:18.729159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.729 [2024-12-07 02:47:18.729527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.729 [2024-12-07 02:47:18.729593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:07.729 [2024-12-07 02:47:18.729680] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:07.729 [2024-12-07 02:47:18.729731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:07.729 pt2 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.729 [2024-12-07 02:47:18.736972] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.729 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.730 "name": "raid_bdev1", 00:14:07.730 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:07.730 "strip_size_kb": 64, 00:14:07.730 "state": "configuring", 00:14:07.730 "raid_level": "raid5f", 00:14:07.730 "superblock": true, 00:14:07.730 "num_base_bdevs": 3, 00:14:07.730 "num_base_bdevs_discovered": 1, 00:14:07.730 "num_base_bdevs_operational": 3, 00:14:07.730 "base_bdevs_list": [ 00:14:07.730 { 00:14:07.730 
"name": "pt1", 00:14:07.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:07.730 "is_configured": true, 00:14:07.730 "data_offset": 2048, 00:14:07.730 "data_size": 63488 00:14:07.730 }, 00:14:07.730 { 00:14:07.730 "name": null, 00:14:07.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.730 "is_configured": false, 00:14:07.730 "data_offset": 0, 00:14:07.730 "data_size": 63488 00:14:07.730 }, 00:14:07.730 { 00:14:07.730 "name": null, 00:14:07.730 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.730 "is_configured": false, 00:14:07.730 "data_offset": 2048, 00:14:07.730 "data_size": 63488 00:14:07.730 } 00:14:07.730 ] 00:14:07.730 }' 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.730 02:47:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.298 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:08.298 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:08.298 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:08.298 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.298 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.298 [2024-12-07 02:47:19.188238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:08.299 [2024-12-07 02:47:19.188282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.299 [2024-12-07 02:47:19.188299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:08.299 [2024-12-07 02:47:19.188308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.299 [2024-12-07 02:47:19.188642] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.299 [2024-12-07 02:47:19.188668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:08.299 [2024-12-07 02:47:19.188722] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:08.299 [2024-12-07 02:47:19.188740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.299 pt2 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.299 [2024-12-07 02:47:19.200217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:08.299 [2024-12-07 02:47:19.200256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.299 [2024-12-07 02:47:19.200274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:08.299 [2024-12-07 02:47:19.200282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.299 [2024-12-07 02:47:19.200621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.299 [2024-12-07 02:47:19.200637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:08.299 [2024-12-07 02:47:19.200688] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:08.299 
[2024-12-07 02:47:19.200704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:08.299 [2024-12-07 02:47:19.200793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:08.299 [2024-12-07 02:47:19.200808] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:08.299 [2024-12-07 02:47:19.201035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:08.299 [2024-12-07 02:47:19.201456] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:08.299 [2024-12-07 02:47:19.201476] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:14:08.299 [2024-12-07 02:47:19.201569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.299 pt3 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.299 "name": "raid_bdev1", 00:14:08.299 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:08.299 "strip_size_kb": 64, 00:14:08.299 "state": "online", 00:14:08.299 "raid_level": "raid5f", 00:14:08.299 "superblock": true, 00:14:08.299 "num_base_bdevs": 3, 00:14:08.299 "num_base_bdevs_discovered": 3, 00:14:08.299 "num_base_bdevs_operational": 3, 00:14:08.299 "base_bdevs_list": [ 00:14:08.299 { 00:14:08.299 "name": "pt1", 00:14:08.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:08.299 "is_configured": true, 00:14:08.299 "data_offset": 2048, 00:14:08.299 "data_size": 63488 00:14:08.299 }, 00:14:08.299 { 00:14:08.299 "name": "pt2", 00:14:08.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.299 "is_configured": true, 00:14:08.299 "data_offset": 2048, 00:14:08.299 "data_size": 63488 00:14:08.299 }, 00:14:08.299 { 00:14:08.299 "name": "pt3", 00:14:08.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.299 "is_configured": true, 00:14:08.299 
"data_offset": 2048, 00:14:08.299 "data_size": 63488 00:14:08.299 } 00:14:08.299 ] 00:14:08.299 }' 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.299 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.869 [2024-12-07 02:47:19.655648] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.869 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.869 "name": "raid_bdev1", 00:14:08.869 "aliases": [ 00:14:08.869 "f7e07858-7cdc-46a8-81f3-039cb3e34579" 00:14:08.869 ], 00:14:08.869 "product_name": "Raid Volume", 00:14:08.869 "block_size": 512, 00:14:08.869 "num_blocks": 126976, 00:14:08.869 "uuid": 
"f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:08.869 "assigned_rate_limits": { 00:14:08.869 "rw_ios_per_sec": 0, 00:14:08.869 "rw_mbytes_per_sec": 0, 00:14:08.869 "r_mbytes_per_sec": 0, 00:14:08.869 "w_mbytes_per_sec": 0 00:14:08.869 }, 00:14:08.869 "claimed": false, 00:14:08.869 "zoned": false, 00:14:08.869 "supported_io_types": { 00:14:08.869 "read": true, 00:14:08.869 "write": true, 00:14:08.869 "unmap": false, 00:14:08.870 "flush": false, 00:14:08.870 "reset": true, 00:14:08.870 "nvme_admin": false, 00:14:08.870 "nvme_io": false, 00:14:08.870 "nvme_io_md": false, 00:14:08.870 "write_zeroes": true, 00:14:08.870 "zcopy": false, 00:14:08.870 "get_zone_info": false, 00:14:08.870 "zone_management": false, 00:14:08.870 "zone_append": false, 00:14:08.870 "compare": false, 00:14:08.870 "compare_and_write": false, 00:14:08.870 "abort": false, 00:14:08.870 "seek_hole": false, 00:14:08.870 "seek_data": false, 00:14:08.870 "copy": false, 00:14:08.870 "nvme_iov_md": false 00:14:08.870 }, 00:14:08.870 "driver_specific": { 00:14:08.870 "raid": { 00:14:08.870 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:08.870 "strip_size_kb": 64, 00:14:08.870 "state": "online", 00:14:08.870 "raid_level": "raid5f", 00:14:08.870 "superblock": true, 00:14:08.870 "num_base_bdevs": 3, 00:14:08.870 "num_base_bdevs_discovered": 3, 00:14:08.870 "num_base_bdevs_operational": 3, 00:14:08.870 "base_bdevs_list": [ 00:14:08.870 { 00:14:08.870 "name": "pt1", 00:14:08.870 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:08.870 "is_configured": true, 00:14:08.870 "data_offset": 2048, 00:14:08.870 "data_size": 63488 00:14:08.870 }, 00:14:08.870 { 00:14:08.870 "name": "pt2", 00:14:08.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:08.870 "is_configured": true, 00:14:08.870 "data_offset": 2048, 00:14:08.870 "data_size": 63488 00:14:08.870 }, 00:14:08.870 { 00:14:08.870 "name": "pt3", 00:14:08.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:08.870 "is_configured": true, 
00:14:08.870 "data_offset": 2048, 00:14:08.870 "data_size": 63488 00:14:08.870 } 00:14:08.870 ] 00:14:08.870 } 00:14:08.870 } 00:14:08.870 }' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:08.870 pt2 00:14:08.870 pt3' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b pt2 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.870 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.870 [2024-12-07 02:47:19.943125] bdev_raid.c:1129:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f7e07858-7cdc-46a8-81f3-039cb3e34579 '!=' f7e07858-7cdc-46a8-81f3-039cb3e34579 ']' 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.129 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.129 [2024-12-07 02:47:19.986924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.130 02:47:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.130 02:47:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.130 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.130 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.130 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.130 "name": "raid_bdev1", 00:14:09.130 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:09.130 "strip_size_kb": 64, 00:14:09.130 "state": "online", 00:14:09.130 "raid_level": "raid5f", 00:14:09.130 "superblock": true, 00:14:09.130 "num_base_bdevs": 3, 00:14:09.130 "num_base_bdevs_discovered": 2, 00:14:09.130 "num_base_bdevs_operational": 2, 00:14:09.130 "base_bdevs_list": [ 00:14:09.130 { 00:14:09.130 "name": null, 00:14:09.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.130 "is_configured": false, 00:14:09.130 "data_offset": 0, 00:14:09.130 "data_size": 63488 00:14:09.130 }, 00:14:09.130 { 00:14:09.130 "name": "pt2", 00:14:09.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.130 "is_configured": true, 00:14:09.130 "data_offset": 2048, 00:14:09.130 "data_size": 63488 00:14:09.130 }, 00:14:09.130 { 00:14:09.130 "name": "pt3", 00:14:09.130 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:09.130 "is_configured": true, 00:14:09.130 "data_offset": 2048, 00:14:09.130 "data_size": 63488 
00:14:09.130 } 00:14:09.130 ] 00:14:09.130 }' 00:14:09.130 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.130 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.389 [2024-12-07 02:47:20.386180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:09.389 [2024-12-07 02:47:20.386246] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.389 [2024-12-07 02:47:20.386326] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.389 [2024-12-07 02:47:20.386389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.389 [2024-12-07 02:47:20.386420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:09.389 02:47:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.389 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.649 [2024-12-07 02:47:20.470039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:09.649 [2024-12-07 02:47:20.470135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.649 [2024-12-07 02:47:20.470157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:09.649 [2024-12-07 02:47:20.470165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.649 [2024-12-07 02:47:20.472570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.649 [2024-12-07 02:47:20.472613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:09.649 [2024-12-07 02:47:20.472669] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:09.649 [2024-12-07 02:47:20.472705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:09.649 pt2 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.649 02:47:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.649 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.650 "name": "raid_bdev1", 00:14:09.650 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:09.650 "strip_size_kb": 64, 00:14:09.650 "state": "configuring", 00:14:09.650 "raid_level": "raid5f", 00:14:09.650 "superblock": true, 00:14:09.650 "num_base_bdevs": 3, 00:14:09.650 "num_base_bdevs_discovered": 1, 00:14:09.650 "num_base_bdevs_operational": 2, 00:14:09.650 "base_bdevs_list": [ 00:14:09.650 { 00:14:09.650 "name": null, 00:14:09.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.650 "is_configured": false, 00:14:09.650 "data_offset": 2048, 00:14:09.650 "data_size": 63488 00:14:09.650 }, 00:14:09.650 { 00:14:09.650 "name": "pt2", 00:14:09.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.650 "is_configured": true, 00:14:09.650 "data_offset": 2048, 00:14:09.650 "data_size": 63488 00:14:09.650 }, 00:14:09.650 { 00:14:09.650 "name": null, 00:14:09.650 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:09.650 "is_configured": false, 00:14:09.650 "data_offset": 2048, 00:14:09.650 "data_size": 63488 00:14:09.650 } 00:14:09.650 ] 00:14:09.650 }' 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.650 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.909 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:09.909 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:09.909 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:09.909 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:09.909 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.909 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.909 [2024-12-07 02:47:20.937235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:09.909 [2024-12-07 02:47:20.937338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.909 [2024-12-07 02:47:20.937373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:09.909 [2024-12-07 02:47:20.937400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.909 [2024-12-07 02:47:20.937765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.909 [2024-12-07 02:47:20.937818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:09.909 [2024-12-07 02:47:20.937896] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:09.909 [2024-12-07 02:47:20.937949] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:09.910 [2024-12-07 02:47:20.938068] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:09.910 [2024-12-07 02:47:20.938104] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:09.910 [2024-12-07 02:47:20.938346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:09.910 [2024-12-07 02:47:20.938875] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:09.910 [2024-12-07 02:47:20.938928] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:14:09.910 [2024-12-07 02:47:20.939212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.910 pt3 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.910 02:47:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.169 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.169 "name": "raid_bdev1", 00:14:10.169 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:10.169 "strip_size_kb": 64, 00:14:10.169 "state": "online", 00:14:10.169 "raid_level": "raid5f", 00:14:10.169 "superblock": true, 00:14:10.169 "num_base_bdevs": 3, 00:14:10.169 "num_base_bdevs_discovered": 2, 00:14:10.169 "num_base_bdevs_operational": 2, 00:14:10.169 "base_bdevs_list": [ 00:14:10.169 { 00:14:10.169 "name": null, 00:14:10.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.169 "is_configured": false, 00:14:10.169 "data_offset": 2048, 00:14:10.169 "data_size": 63488 00:14:10.169 }, 00:14:10.169 { 00:14:10.169 "name": "pt2", 00:14:10.169 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.169 "is_configured": true, 00:14:10.169 "data_offset": 2048, 00:14:10.169 "data_size": 63488 00:14:10.169 }, 00:14:10.169 { 00:14:10.169 "name": "pt3", 00:14:10.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.169 "is_configured": true, 00:14:10.169 "data_offset": 2048, 00:14:10.169 "data_size": 63488 00:14:10.169 } 00:14:10.169 ] 00:14:10.169 }' 00:14:10.170 02:47:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.170 02:47:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 [2024-12-07 02:47:21.380459] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.430 [2024-12-07 02:47:21.380527] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.430 [2024-12-07 02:47:21.380625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.430 [2024-12-07 02:47:21.380692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.430 [2024-12-07 02:47:21.380781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 
-gt 2 ']' 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 [2024-12-07 02:47:21.456330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:10.430 [2024-12-07 02:47:21.456417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.430 [2024-12-07 02:47:21.456446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:10.430 [2024-12-07 02:47:21.456474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.430 [2024-12-07 02:47:21.458826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.430 [2024-12-07 02:47:21.458890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:10.430 [2024-12-07 02:47:21.458964] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:10.430 [2024-12-07 02:47:21.459036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:10.430 [2024-12-07 02:47:21.459147] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on 
bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:10.430 [2024-12-07 02:47:21.459208] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.430 [2024-12-07 02:47:21.459248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:14:10.430 [2024-12-07 02:47:21.459319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:10.430 pt1 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.430 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.693 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.694 "name": "raid_bdev1", 00:14:10.694 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:10.694 "strip_size_kb": 64, 00:14:10.694 "state": "configuring", 00:14:10.694 "raid_level": "raid5f", 00:14:10.694 "superblock": true, 00:14:10.694 "num_base_bdevs": 3, 00:14:10.694 "num_base_bdevs_discovered": 1, 00:14:10.694 "num_base_bdevs_operational": 2, 00:14:10.694 "base_bdevs_list": [ 00:14:10.694 { 00:14:10.694 "name": null, 00:14:10.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.694 "is_configured": false, 00:14:10.694 "data_offset": 2048, 00:14:10.694 "data_size": 63488 00:14:10.694 }, 00:14:10.694 { 00:14:10.694 "name": "pt2", 00:14:10.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.694 "is_configured": true, 00:14:10.694 "data_offset": 2048, 00:14:10.694 "data_size": 63488 00:14:10.694 }, 00:14:10.694 { 00:14:10.694 "name": null, 00:14:10.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.694 "is_configured": false, 00:14:10.694 "data_offset": 2048, 00:14:10.694 "data_size": 63488 00:14:10.694 } 00:14:10.694 ] 00:14:10.694 }' 00:14:10.694 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.694 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.960 [2024-12-07 02:47:21.951492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:10.960 [2024-12-07 02:47:21.951537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.960 [2024-12-07 02:47:21.951550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:10.960 [2024-12-07 02:47:21.951560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.960 [2024-12-07 02:47:21.951919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.960 [2024-12-07 02:47:21.952014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:10.960 [2024-12-07 02:47:21.952073] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:10.960 [2024-12-07 02:47:21.952095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:10.960 [2024-12-07 02:47:21.952169] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:10.960 [2024-12-07 02:47:21.952181] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:10.960 [2024-12-07 02:47:21.952414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:10.960 [2024-12-07 02:47:21.952919] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:10.960 [2024-12-07 02:47:21.952938] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:10.960 [2024-12-07 02:47:21.953091] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.960 pt3 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.960 02:47:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.960 02:47:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.960 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.960 "name": "raid_bdev1", 00:14:10.960 "uuid": "f7e07858-7cdc-46a8-81f3-039cb3e34579", 00:14:10.960 "strip_size_kb": 64, 00:14:10.960 "state": "online", 00:14:10.960 "raid_level": "raid5f", 00:14:10.960 "superblock": true, 00:14:10.960 "num_base_bdevs": 3, 00:14:10.960 "num_base_bdevs_discovered": 2, 00:14:10.960 "num_base_bdevs_operational": 2, 00:14:10.960 "base_bdevs_list": [ 00:14:10.960 { 00:14:10.960 "name": null, 00:14:10.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.960 "is_configured": false, 00:14:10.960 "data_offset": 2048, 00:14:10.960 "data_size": 63488 00:14:10.960 }, 00:14:10.960 { 00:14:10.960 "name": "pt2", 00:14:10.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.960 "is_configured": true, 00:14:10.960 "data_offset": 2048, 00:14:10.960 "data_size": 63488 00:14:10.960 }, 00:14:10.960 { 00:14:10.960 "name": "pt3", 00:14:10.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.960 "is_configured": true, 00:14:10.960 "data_offset": 2048, 00:14:10.960 "data_size": 63488 00:14:10.960 } 00:14:10.960 ] 00:14:10.960 }' 00:14:10.960 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.960 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.528 [2024-12-07 02:47:22.446840] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f7e07858-7cdc-46a8-81f3-039cb3e34579 '!=' f7e07858-7cdc-46a8-81f3-039cb3e34579 ']' 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91924 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91924 ']' 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91924 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91924 00:14:11.528 02:47:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91924' 00:14:11.528 killing process with pid 91924 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91924 [2024-12-07 02:47:22.529161] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:11.528 [2024-12-07 02:47:22.529286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.528 [2024-12-07 02:47:22.529368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.528 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91924 [2024-12-07 02:47:22.529432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline [2024-12-07 02:47:22.588669] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:12.098 02:47:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:12.098 00:14:12.098 real 0m6.824s 00:14:12.098 user 0m11.230s 00:14:12.098 sys 0m1.468s 00:14:12.098 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.098 02:47:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.098 ************************************ 00:14:12.098 END TEST raid5f_superblock_test 00:14:12.098 ************************************ 00:14:12.098 02:47:23 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:12.098 02:47:23 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:12.098 
02:47:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:12.098 02:47:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:12.098 02:47:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:12.098 ************************************ 00:14:12.098 START TEST raid5f_rebuild_test 00:14:12.098 ************************************ 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=92351 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 92351 00:14:12.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 92351 ']' 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.098 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.098 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:12.098 Zero copy mechanism will not be used. 00:14:12.098 [2024-12-07 02:47:23.144924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:12.099 [2024-12-07 02:47:23.145038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92351 ] 00:14:12.358 [2024-12-07 02:47:23.305521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.358 [2024-12-07 02:47:23.375055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.618 [2024-12-07 02:47:23.450549] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:12.618 [2024-12-07 02:47:23.450616] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 BaseBdev1_malloc 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 [2024-12-07 02:47:23.999944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:13.187 [2024-12-07 02:47:24.000011] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.187 [2024-12-07 02:47:24.000045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:13.187 [2024-12-07 02:47:24.000063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.187 [2024-12-07 02:47:24.002397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.187 [2024-12-07 02:47:24.002502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:13.187 BaseBdev1 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:13.187 
02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 BaseBdev2_malloc 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 [2024-12-07 02:47:24.049637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:13.187 [2024-12-07 02:47:24.049891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.187 [2024-12-07 02:47:24.049958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:13.187 [2024-12-07 02:47:24.049984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.187 [2024-12-07 02:47:24.054157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.187 [2024-12-07 02:47:24.054213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:13.187 BaseBdev2 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 
BaseBdev3_malloc 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 [2024-12-07 02:47:24.086117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:13.187 [2024-12-07 02:47:24.086163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.187 [2024-12-07 02:47:24.086189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:13.187 [2024-12-07 02:47:24.086198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.187 [2024-12-07 02:47:24.088470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.187 [2024-12-07 02:47:24.088503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:13.187 BaseBdev3 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 spare_malloc 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:13.187 02:47:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 spare_delay 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 [2024-12-07 02:47:24.132321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:13.187 [2024-12-07 02:47:24.132434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.187 [2024-12-07 02:47:24.132463] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:13.187 [2024-12-07 02:47:24.132471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.187 [2024-12-07 02:47:24.134813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.187 [2024-12-07 02:47:24.134844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:13.187 spare 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.187 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.187 [2024-12-07 02:47:24.144367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:14:13.187 [2024-12-07 02:47:24.146383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.187 [2024-12-07 02:47:24.146448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.187 [2024-12-07 02:47:24.146523] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:13.187 [2024-12-07 02:47:24.146533] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:13.187 [2024-12-07 02:47:24.146790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:13.188 [2024-12-07 02:47:24.147185] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:13.188 [2024-12-07 02:47:24.147195] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:13.188 [2024-12-07 02:47:24.147310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.188 "name": "raid_bdev1", 00:14:13.188 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:13.188 "strip_size_kb": 64, 00:14:13.188 "state": "online", 00:14:13.188 "raid_level": "raid5f", 00:14:13.188 "superblock": false, 00:14:13.188 "num_base_bdevs": 3, 00:14:13.188 "num_base_bdevs_discovered": 3, 00:14:13.188 "num_base_bdevs_operational": 3, 00:14:13.188 "base_bdevs_list": [ 00:14:13.188 { 00:14:13.188 "name": "BaseBdev1", 00:14:13.188 "uuid": "056ff03e-7cac-5371-a654-1d2c450a2065", 00:14:13.188 "is_configured": true, 00:14:13.188 "data_offset": 0, 00:14:13.188 "data_size": 65536 00:14:13.188 }, 00:14:13.188 { 00:14:13.188 "name": "BaseBdev2", 00:14:13.188 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:13.188 "is_configured": true, 00:14:13.188 "data_offset": 0, 00:14:13.188 "data_size": 65536 00:14:13.188 }, 00:14:13.188 { 00:14:13.188 "name": "BaseBdev3", 00:14:13.188 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:13.188 "is_configured": true, 00:14:13.188 "data_offset": 0, 00:14:13.188 "data_size": 65536 00:14:13.188 } 00:14:13.188 ] 00:14:13.188 }' 
00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.188 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 [2024-12-07 02:47:24.537083] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:13.759 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:13.759 [2024-12-07 02:47:24.800510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:13.759 /dev/nbd0 00:14:14.019 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:14.019 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:14.019 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:14.019 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:14.019 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:14.019 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # 
break 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:14.020 1+0 records in 00:14:14.020 1+0 records out 00:14:14.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524707 s, 7.8 MB/s 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:14.020 02:47:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:14.277 512+0 records in 00:14:14.277 512+0 records out 00:14:14.277 67108864 bytes (67 MB, 64 MiB) copied, 0.287479 s, 233 MB/s 00:14:14.277 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:14:14.277 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:14.277 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:14.277 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:14.277 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:14.277 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:14.277 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:14.536 [2024-12-07 02:47:25.412728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.536 [2024-12-07 02:47:25.434059] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.536 "name": "raid_bdev1", 00:14:14.536 "uuid": 
"9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:14.536 "strip_size_kb": 64, 00:14:14.536 "state": "online", 00:14:14.536 "raid_level": "raid5f", 00:14:14.536 "superblock": false, 00:14:14.536 "num_base_bdevs": 3, 00:14:14.536 "num_base_bdevs_discovered": 2, 00:14:14.536 "num_base_bdevs_operational": 2, 00:14:14.536 "base_bdevs_list": [ 00:14:14.536 { 00:14:14.536 "name": null, 00:14:14.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.536 "is_configured": false, 00:14:14.536 "data_offset": 0, 00:14:14.536 "data_size": 65536 00:14:14.536 }, 00:14:14.536 { 00:14:14.536 "name": "BaseBdev2", 00:14:14.536 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:14.536 "is_configured": true, 00:14:14.536 "data_offset": 0, 00:14:14.536 "data_size": 65536 00:14:14.536 }, 00:14:14.536 { 00:14:14.536 "name": "BaseBdev3", 00:14:14.536 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:14.536 "is_configured": true, 00:14:14.536 "data_offset": 0, 00:14:14.536 "data_size": 65536 00:14:14.536 } 00:14:14.536 ] 00:14:14.536 }' 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.536 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.106 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.107 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.107 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.107 [2024-12-07 02:47:25.889289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.107 [2024-12-07 02:47:25.893220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:14:15.107 02:47:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.107 02:47:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 
00:14:15.107 [2024-12-07 02:47:25.895353] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.047 "name": "raid_bdev1", 00:14:16.047 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:16.047 "strip_size_kb": 64, 00:14:16.047 "state": "online", 00:14:16.047 "raid_level": "raid5f", 00:14:16.047 "superblock": false, 00:14:16.047 "num_base_bdevs": 3, 00:14:16.047 "num_base_bdevs_discovered": 3, 00:14:16.047 "num_base_bdevs_operational": 3, 00:14:16.047 "process": { 00:14:16.047 "type": "rebuild", 00:14:16.047 "target": "spare", 00:14:16.047 "progress": { 00:14:16.047 "blocks": 20480, 00:14:16.047 "percent": 15 00:14:16.047 } 00:14:16.047 }, 00:14:16.047 "base_bdevs_list": [ 00:14:16.047 { 00:14:16.047 "name": "spare", 00:14:16.047 
"uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:16.047 "is_configured": true, 00:14:16.047 "data_offset": 0, 00:14:16.047 "data_size": 65536 00:14:16.047 }, 00:14:16.047 { 00:14:16.047 "name": "BaseBdev2", 00:14:16.047 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:16.047 "is_configured": true, 00:14:16.047 "data_offset": 0, 00:14:16.047 "data_size": 65536 00:14:16.047 }, 00:14:16.047 { 00:14:16.047 "name": "BaseBdev3", 00:14:16.047 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:16.047 "is_configured": true, 00:14:16.047 "data_offset": 0, 00:14:16.047 "data_size": 65536 00:14:16.047 } 00:14:16.047 ] 00:14:16.047 }' 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.047 02:47:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.047 [2024-12-07 02:47:27.056005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.047 [2024-12-07 02:47:27.102073] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.047 [2024-12-07 02:47:27.102171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.047 [2024-12-07 02:47:27.102187] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.047 [2024-12-07 02:47:27.102196] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.047 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.307 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.307 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.307 "name": "raid_bdev1", 00:14:16.307 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 
00:14:16.307 "strip_size_kb": 64, 00:14:16.307 "state": "online", 00:14:16.307 "raid_level": "raid5f", 00:14:16.307 "superblock": false, 00:14:16.307 "num_base_bdevs": 3, 00:14:16.307 "num_base_bdevs_discovered": 2, 00:14:16.307 "num_base_bdevs_operational": 2, 00:14:16.307 "base_bdevs_list": [ 00:14:16.307 { 00:14:16.307 "name": null, 00:14:16.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.307 "is_configured": false, 00:14:16.307 "data_offset": 0, 00:14:16.307 "data_size": 65536 00:14:16.307 }, 00:14:16.307 { 00:14:16.307 "name": "BaseBdev2", 00:14:16.307 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:16.307 "is_configured": true, 00:14:16.307 "data_offset": 0, 00:14:16.307 "data_size": 65536 00:14:16.307 }, 00:14:16.307 { 00:14:16.307 "name": "BaseBdev3", 00:14:16.307 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:16.307 "is_configured": true, 00:14:16.307 "data_offset": 0, 00:14:16.307 "data_size": 65536 00:14:16.307 } 00:14:16.307 ] 00:14:16.307 }' 00:14:16.307 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.307 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.567 02:47:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.567 "name": "raid_bdev1", 00:14:16.567 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:16.567 "strip_size_kb": 64, 00:14:16.567 "state": "online", 00:14:16.567 "raid_level": "raid5f", 00:14:16.567 "superblock": false, 00:14:16.567 "num_base_bdevs": 3, 00:14:16.567 "num_base_bdevs_discovered": 2, 00:14:16.567 "num_base_bdevs_operational": 2, 00:14:16.567 "base_bdevs_list": [ 00:14:16.567 { 00:14:16.567 "name": null, 00:14:16.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.567 "is_configured": false, 00:14:16.567 "data_offset": 0, 00:14:16.567 "data_size": 65536 00:14:16.567 }, 00:14:16.567 { 00:14:16.567 "name": "BaseBdev2", 00:14:16.567 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:16.567 "is_configured": true, 00:14:16.567 "data_offset": 0, 00:14:16.567 "data_size": 65536 00:14:16.567 }, 00:14:16.567 { 00:14:16.567 "name": "BaseBdev3", 00:14:16.567 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:16.567 "is_configured": true, 00:14:16.567 "data_offset": 0, 00:14:16.567 "data_size": 65536 00:14:16.567 } 00:14:16.567 ] 00:14:16.567 }' 00:14:16.567 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.828 [2024-12-07 02:47:27.738319] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:16.828 [2024-12-07 02:47:27.741879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:14:16.828 [2024-12-07 02:47:27.743984] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.828 02:47:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.765 02:47:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.765 "name": "raid_bdev1", 00:14:17.765 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:17.765 "strip_size_kb": 64, 00:14:17.765 "state": "online", 00:14:17.765 "raid_level": "raid5f", 00:14:17.765 "superblock": false, 00:14:17.765 "num_base_bdevs": 3, 00:14:17.765 "num_base_bdevs_discovered": 3, 00:14:17.765 "num_base_bdevs_operational": 3, 00:14:17.765 "process": { 00:14:17.765 "type": "rebuild", 00:14:17.765 "target": "spare", 00:14:17.765 "progress": { 00:14:17.765 "blocks": 20480, 00:14:17.765 "percent": 15 00:14:17.765 } 00:14:17.765 }, 00:14:17.765 "base_bdevs_list": [ 00:14:17.765 { 00:14:17.765 "name": "spare", 00:14:17.765 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:17.765 "is_configured": true, 00:14:17.765 "data_offset": 0, 00:14:17.765 "data_size": 65536 00:14:17.765 }, 00:14:17.765 { 00:14:17.765 "name": "BaseBdev2", 00:14:17.765 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:17.765 "is_configured": true, 00:14:17.765 "data_offset": 0, 00:14:17.765 "data_size": 65536 00:14:17.765 }, 00:14:17.765 { 00:14:17.765 "name": "BaseBdev3", 00:14:17.765 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:17.765 "is_configured": true, 00:14:17.765 "data_offset": 0, 00:14:17.765 "data_size": 65536 00:14:17.765 } 00:14:17.765 ] 00:14:17.765 }' 00:14:17.765 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=461 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.024 "name": "raid_bdev1", 00:14:18.024 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:18.024 "strip_size_kb": 64, 00:14:18.024 "state": "online", 00:14:18.024 "raid_level": "raid5f", 00:14:18.024 "superblock": false, 00:14:18.024 "num_base_bdevs": 3, 00:14:18.024 "num_base_bdevs_discovered": 3, 00:14:18.024 "num_base_bdevs_operational": 3, 00:14:18.024 "process": { 00:14:18.024 "type": "rebuild", 
00:14:18.024 "target": "spare", 00:14:18.024 "progress": { 00:14:18.024 "blocks": 22528, 00:14:18.024 "percent": 17 00:14:18.024 } 00:14:18.024 }, 00:14:18.024 "base_bdevs_list": [ 00:14:18.024 { 00:14:18.024 "name": "spare", 00:14:18.024 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:18.024 "is_configured": true, 00:14:18.024 "data_offset": 0, 00:14:18.024 "data_size": 65536 00:14:18.024 }, 00:14:18.024 { 00:14:18.024 "name": "BaseBdev2", 00:14:18.024 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:18.024 "is_configured": true, 00:14:18.024 "data_offset": 0, 00:14:18.024 "data_size": 65536 00:14:18.024 }, 00:14:18.024 { 00:14:18.024 "name": "BaseBdev3", 00:14:18.024 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:18.024 "is_configured": true, 00:14:18.024 "data_offset": 0, 00:14:18.024 "data_size": 65536 00:14:18.024 } 00:14:18.024 ] 00:14:18.024 }' 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.024 02:47:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.025 02:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.025 02:47:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.963 
02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.963 02:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.222 02:47:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.222 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.222 "name": "raid_bdev1", 00:14:19.222 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:19.222 "strip_size_kb": 64, 00:14:19.222 "state": "online", 00:14:19.222 "raid_level": "raid5f", 00:14:19.223 "superblock": false, 00:14:19.223 "num_base_bdevs": 3, 00:14:19.223 "num_base_bdevs_discovered": 3, 00:14:19.223 "num_base_bdevs_operational": 3, 00:14:19.223 "process": { 00:14:19.223 "type": "rebuild", 00:14:19.223 "target": "spare", 00:14:19.223 "progress": { 00:14:19.223 "blocks": 45056, 00:14:19.223 "percent": 34 00:14:19.223 } 00:14:19.223 }, 00:14:19.223 "base_bdevs_list": [ 00:14:19.223 { 00:14:19.223 "name": "spare", 00:14:19.223 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:19.223 "is_configured": true, 00:14:19.223 "data_offset": 0, 00:14:19.223 "data_size": 65536 00:14:19.223 }, 00:14:19.223 { 00:14:19.223 "name": "BaseBdev2", 00:14:19.223 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:19.223 "is_configured": true, 00:14:19.223 "data_offset": 0, 00:14:19.223 "data_size": 65536 00:14:19.223 }, 00:14:19.223 { 00:14:19.223 "name": "BaseBdev3", 00:14:19.223 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:19.223 "is_configured": true, 00:14:19.223 "data_offset": 0, 00:14:19.223 "data_size": 65536 00:14:19.223 
} 00:14:19.223 ] 00:14:19.223 }' 00:14:19.223 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.223 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:19.223 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.223 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:19.223 02:47:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.160 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.160 "name": "raid_bdev1", 00:14:20.160 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:20.160 
"strip_size_kb": 64, 00:14:20.160 "state": "online", 00:14:20.160 "raid_level": "raid5f", 00:14:20.160 "superblock": false, 00:14:20.160 "num_base_bdevs": 3, 00:14:20.160 "num_base_bdevs_discovered": 3, 00:14:20.160 "num_base_bdevs_operational": 3, 00:14:20.160 "process": { 00:14:20.160 "type": "rebuild", 00:14:20.160 "target": "spare", 00:14:20.160 "progress": { 00:14:20.160 "blocks": 69632, 00:14:20.160 "percent": 53 00:14:20.160 } 00:14:20.160 }, 00:14:20.160 "base_bdevs_list": [ 00:14:20.160 { 00:14:20.160 "name": "spare", 00:14:20.160 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:20.160 "is_configured": true, 00:14:20.160 "data_offset": 0, 00:14:20.160 "data_size": 65536 00:14:20.160 }, 00:14:20.160 { 00:14:20.160 "name": "BaseBdev2", 00:14:20.160 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:20.160 "is_configured": true, 00:14:20.160 "data_offset": 0, 00:14:20.160 "data_size": 65536 00:14:20.160 }, 00:14:20.160 { 00:14:20.160 "name": "BaseBdev3", 00:14:20.160 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:20.160 "is_configured": true, 00:14:20.160 "data_offset": 0, 00:14:20.160 "data_size": 65536 00:14:20.160 } 00:14:20.160 ] 00:14:20.160 }' 00:14:20.420 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.420 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.420 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.420 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.420 02:47:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:21.358 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:21.359 02:47:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.359 "name": "raid_bdev1", 00:14:21.359 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:21.359 "strip_size_kb": 64, 00:14:21.359 "state": "online", 00:14:21.359 "raid_level": "raid5f", 00:14:21.359 "superblock": false, 00:14:21.359 "num_base_bdevs": 3, 00:14:21.359 "num_base_bdevs_discovered": 3, 00:14:21.359 "num_base_bdevs_operational": 3, 00:14:21.359 "process": { 00:14:21.359 "type": "rebuild", 00:14:21.359 "target": "spare", 00:14:21.359 "progress": { 00:14:21.359 "blocks": 92160, 00:14:21.359 "percent": 70 00:14:21.359 } 00:14:21.359 }, 00:14:21.359 "base_bdevs_list": [ 00:14:21.359 { 00:14:21.359 "name": "spare", 00:14:21.359 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:21.359 "is_configured": true, 00:14:21.359 "data_offset": 0, 00:14:21.359 "data_size": 65536 00:14:21.359 }, 00:14:21.359 { 00:14:21.359 "name": "BaseBdev2", 00:14:21.359 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:21.359 
"is_configured": true, 00:14:21.359 "data_offset": 0, 00:14:21.359 "data_size": 65536 00:14:21.359 }, 00:14:21.359 { 00:14:21.359 "name": "BaseBdev3", 00:14:21.359 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:21.359 "is_configured": true, 00:14:21.359 "data_offset": 0, 00:14:21.359 "data_size": 65536 00:14:21.359 } 00:14:21.359 ] 00:14:21.359 }' 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:21.359 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.617 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:21.617 02:47:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.556 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.556 "name": "raid_bdev1", 00:14:22.556 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:22.556 "strip_size_kb": 64, 00:14:22.556 "state": "online", 00:14:22.556 "raid_level": "raid5f", 00:14:22.556 "superblock": false, 00:14:22.556 "num_base_bdevs": 3, 00:14:22.557 "num_base_bdevs_discovered": 3, 00:14:22.557 "num_base_bdevs_operational": 3, 00:14:22.557 "process": { 00:14:22.557 "type": "rebuild", 00:14:22.557 "target": "spare", 00:14:22.557 "progress": { 00:14:22.557 "blocks": 116736, 00:14:22.557 "percent": 89 00:14:22.557 } 00:14:22.557 }, 00:14:22.557 "base_bdevs_list": [ 00:14:22.557 { 00:14:22.557 "name": "spare", 00:14:22.557 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:22.557 "is_configured": true, 00:14:22.557 "data_offset": 0, 00:14:22.557 "data_size": 65536 00:14:22.557 }, 00:14:22.557 { 00:14:22.557 "name": "BaseBdev2", 00:14:22.557 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:22.557 "is_configured": true, 00:14:22.557 "data_offset": 0, 00:14:22.557 "data_size": 65536 00:14:22.557 }, 00:14:22.557 { 00:14:22.557 "name": "BaseBdev3", 00:14:22.557 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:22.557 "is_configured": true, 00:14:22.557 "data_offset": 0, 00:14:22.557 "data_size": 65536 00:14:22.557 } 00:14:22.557 ] 00:14:22.557 }' 00:14:22.557 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.557 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.557 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.557 02:47:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.557 02:47:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.126 [2024-12-07 02:47:34.176709] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:23.126 [2024-12-07 02:47:34.176813] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:23.126 [2024-12-07 02:47:34.176883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.696 "name": "raid_bdev1", 00:14:23.696 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:23.696 "strip_size_kb": 64, 00:14:23.696 "state": "online", 00:14:23.696 "raid_level": "raid5f", 00:14:23.696 "superblock": false, 
00:14:23.696 "num_base_bdevs": 3, 00:14:23.696 "num_base_bdevs_discovered": 3, 00:14:23.696 "num_base_bdevs_operational": 3, 00:14:23.696 "base_bdevs_list": [ 00:14:23.696 { 00:14:23.696 "name": "spare", 00:14:23.696 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:23.696 "is_configured": true, 00:14:23.696 "data_offset": 0, 00:14:23.696 "data_size": 65536 00:14:23.696 }, 00:14:23.696 { 00:14:23.696 "name": "BaseBdev2", 00:14:23.696 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:23.696 "is_configured": true, 00:14:23.696 "data_offset": 0, 00:14:23.696 "data_size": 65536 00:14:23.696 }, 00:14:23.696 { 00:14:23.696 "name": "BaseBdev3", 00:14:23.696 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:23.696 "is_configured": true, 00:14:23.696 "data_offset": 0, 00:14:23.696 "data_size": 65536 00:14:23.696 } 00:14:23.696 ] 00:14:23.696 }' 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:23.696 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.956 "name": "raid_bdev1", 00:14:23.956 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:23.956 "strip_size_kb": 64, 00:14:23.956 "state": "online", 00:14:23.956 "raid_level": "raid5f", 00:14:23.956 "superblock": false, 00:14:23.956 "num_base_bdevs": 3, 00:14:23.956 "num_base_bdevs_discovered": 3, 00:14:23.956 "num_base_bdevs_operational": 3, 00:14:23.956 "base_bdevs_list": [ 00:14:23.956 { 00:14:23.956 "name": "spare", 00:14:23.956 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:23.956 "is_configured": true, 00:14:23.956 "data_offset": 0, 00:14:23.956 "data_size": 65536 00:14:23.956 }, 00:14:23.956 { 00:14:23.956 "name": "BaseBdev2", 00:14:23.956 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:23.956 "is_configured": true, 00:14:23.956 "data_offset": 0, 00:14:23.956 "data_size": 65536 00:14:23.956 }, 00:14:23.956 { 00:14:23.956 "name": "BaseBdev3", 00:14:23.956 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:23.956 "is_configured": true, 00:14:23.956 "data_offset": 0, 00:14:23.956 "data_size": 65536 00:14:23.956 } 00:14:23.956 ] 00:14:23.956 }' 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.956 "name": "raid_bdev1", 00:14:23.956 "uuid": "9e7a35a9-3b30-45d6-a70a-6ef1cb4fae80", 00:14:23.956 "strip_size_kb": 
64, 00:14:23.956 "state": "online", 00:14:23.956 "raid_level": "raid5f", 00:14:23.956 "superblock": false, 00:14:23.956 "num_base_bdevs": 3, 00:14:23.956 "num_base_bdevs_discovered": 3, 00:14:23.956 "num_base_bdevs_operational": 3, 00:14:23.956 "base_bdevs_list": [ 00:14:23.956 { 00:14:23.956 "name": "spare", 00:14:23.956 "uuid": "4201d3af-7510-5b4a-8fa0-6c49c80f44ed", 00:14:23.956 "is_configured": true, 00:14:23.956 "data_offset": 0, 00:14:23.956 "data_size": 65536 00:14:23.956 }, 00:14:23.956 { 00:14:23.956 "name": "BaseBdev2", 00:14:23.956 "uuid": "2a85b814-368b-5386-8909-735f02d09de2", 00:14:23.956 "is_configured": true, 00:14:23.956 "data_offset": 0, 00:14:23.956 "data_size": 65536 00:14:23.956 }, 00:14:23.956 { 00:14:23.956 "name": "BaseBdev3", 00:14:23.956 "uuid": "186d20bf-55d0-5406-b017-daa597072f6a", 00:14:23.956 "is_configured": true, 00:14:23.956 "data_offset": 0, 00:14:23.956 "data_size": 65536 00:14:23.956 } 00:14:23.956 ] 00:14:23.956 }' 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.956 02:47:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.524 [2024-12-07 02:47:35.403806] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.524 [2024-12-07 02:47:35.403876] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.524 [2024-12-07 02:47:35.403993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.524 [2024-12-07 02:47:35.404112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:14:24.524 [2024-12-07 02:47:35.404164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.524 02:47:35 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.524 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:24.782 /dev/nbd0 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:24.782 1+0 records in 00:14:24.782 1+0 records out 00:14:24.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283562 s, 14.4 MB/s 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:24.782 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:24.783 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:24.783 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:24.783 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:24.783 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:24.783 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:25.041 /dev/nbd1 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.041 1+0 records in 00:14:25.041 1+0 records out 00:14:25.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347223 s, 11.8 MB/s 
00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:25.041 02:47:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:25.041 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:25.041 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.041 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:25.041 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.042 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:25.042 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.042 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.300 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 92351 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 92351 ']' 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- 
# kill -0 92351 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92351 00:14:25.559 killing process with pid 92351 00:14:25.559 Received shutdown signal, test time was about 60.000000 seconds 00:14:25.559 00:14:25.559 Latency(us) 00:14:25.559 [2024-12-07T02:47:36.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.559 [2024-12-07T02:47:36.637Z] =================================================================================================================== 00:14:25.559 [2024-12-07T02:47:36.637Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92351' 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 92351 00:14:25.559 [2024-12-07 02:47:36.509412] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.559 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 92351 00:14:25.559 [2024-12-07 02:47:36.584264] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:26.126 ************************************ 00:14:26.126 END TEST raid5f_rebuild_test 00:14:26.126 ************************************ 00:14:26.126 02:47:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:26.126 00:14:26.126 real 0m13.894s 00:14:26.126 user 0m17.297s 00:14:26.126 sys 0m2.084s 00:14:26.126 02:47:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.126 02:47:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 02:47:37 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:26.126 02:47:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:26.126 02:47:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:26.126 02:47:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:26.126 ************************************ 00:14:26.126 START TEST raid5f_rebuild_test_sb 00:14:26.126 ************************************ 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.126 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 
-- # raid_pid=92780 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92780 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92780 ']' 00:14:26.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.127 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:26.127 Zero copy mechanism will not be used. 00:14:26.127 [2024-12-07 02:47:37.123592] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
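As an aside, the parameters recorded above can be sanity-checked with plain shell arithmetic: the `bdev_malloc_create 32 512` calls later in this run make 32 MiB base bdevs of 512-byte blocks, and the 3 MiB bdevperf I/O size (`-o 3M`) is indeed above the 64 KiB zero-copy threshold the harness warns about. This is an illustrative sketch only; the variable names are not part of the test scripts.

```shell
# Illustrative check of the numbers reported in this log.
# 32 MiB malloc bdev with 512-byte blocks -> block count per base bdev:
base_blocks=$(( 32 * 1024 * 1024 / 512 ))
echo "base bdev blocks: ${base_blocks}"          # 65536

# bdevperf runs with -o 3M; the harness warns because this exceeds the
# 65536-byte zero-copy threshold, so the zero copy mechanism is not used:
io_size=$(( 3 * 1024 * 1024 ))
echo "io size: ${io_size}"                       # 3145728
[ "$io_size" -gt 65536 ] && echo "zero copy disabled"
```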
00:14:26.127 [2024-12-07 02:47:37.123757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92780 ] 00:14:26.387 [2024-12-07 02:47:37.283391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.387 [2024-12-07 02:47:37.354123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.387 [2024-12-07 02:47:37.431294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.387 [2024-12-07 02:47:37.431416] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.956 BaseBdev1_malloc 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.956 [2024-12-07 02:47:37.958334] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:26.956 [2024-12-07 02:47:37.958489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.956 [2024-12-07 02:47:37.958543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:26.956 [2024-12-07 02:47:37.958599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.956 [2024-12-07 02:47:37.961023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.956 [2024-12-07 02:47:37.961099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:26.956 BaseBdev1 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.956 BaseBdev2_malloc 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.956 02:47:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.956 [2024-12-07 02:47:38.005085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:26.956 [2024-12-07 02:47:38.005287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:26.956 [2024-12-07 02:47:38.005371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:26.956 [2024-12-07 02:47:38.005468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.956 [2024-12-07 02:47:38.009860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:26.956 [2024-12-07 02:47:38.010005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:26.956 BaseBdev2 00:14:26.956 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.956 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:26.956 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:26.956 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.956 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.217 BaseBdev3_malloc 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.217 [2024-12-07 02:47:38.042079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:27.217 [2024-12-07 02:47:38.042124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.217 [2024-12-07 02:47:38.042147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:27.217 [2024-12-07 
02:47:38.042156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.217 [2024-12-07 02:47:38.044517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.217 [2024-12-07 02:47:38.044551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:27.217 BaseBdev3 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.217 spare_malloc 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.217 spare_delay 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.217 [2024-12-07 02:47:38.088572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:27.217 [2024-12-07 02:47:38.088627] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:27.217 [2024-12-07 02:47:38.088652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:27.217 [2024-12-07 02:47:38.088660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:27.217 [2024-12-07 02:47:38.090934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:27.217 [2024-12-07 02:47:38.090966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:27.217 spare 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.217 [2024-12-07 02:47:38.100639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.217 [2024-12-07 02:47:38.102609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.217 [2024-12-07 02:47:38.102674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.217 [2024-12-07 02:47:38.102824] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:14:27.217 [2024-12-07 02:47:38.102837] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:27.217 [2024-12-07 02:47:38.103094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:27.217 [2024-12-07 02:47:38.103509] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:14:27.217 [2024-12-07 02:47:38.103521] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:14:27.217 [2024-12-07 02:47:38.103662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.217 "name": "raid_bdev1", 00:14:27.217 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:27.217 "strip_size_kb": 64, 00:14:27.217 "state": "online", 00:14:27.217 "raid_level": "raid5f", 00:14:27.217 "superblock": true, 00:14:27.217 "num_base_bdevs": 3, 00:14:27.217 "num_base_bdevs_discovered": 3, 00:14:27.217 "num_base_bdevs_operational": 3, 00:14:27.217 "base_bdevs_list": [ 00:14:27.217 { 00:14:27.217 "name": "BaseBdev1", 00:14:27.217 "uuid": "e1aa7efb-ec9a-5db0-b8a1-1221b51ad96f", 00:14:27.217 "is_configured": true, 00:14:27.217 "data_offset": 2048, 00:14:27.217 "data_size": 63488 00:14:27.217 }, 00:14:27.217 { 00:14:27.217 "name": "BaseBdev2", 00:14:27.217 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:27.217 "is_configured": true, 00:14:27.217 "data_offset": 2048, 00:14:27.217 "data_size": 63488 00:14:27.217 }, 00:14:27.217 { 00:14:27.217 "name": "BaseBdev3", 00:14:27.217 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:27.217 "is_configured": true, 00:14:27.217 "data_offset": 2048, 00:14:27.217 "data_size": 63488 00:14:27.217 } 00:14:27.217 ] 00:14:27.217 }' 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.217 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.477 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:27.477 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:27.477 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.477 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.736 [2024-12-07 02:47:38.557377] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:27.736 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
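The `raid_bdev_size=126976` and `data_offset=2048` values captured above follow directly from the raid5f geometry of this test; a minimal shell sketch of that arithmetic (variable names are illustrative, not from the test scripts):

```shell
# raid5f over 3 base bdevs with a superblock reserving 2048 blocks:
base_blocks=65536                 # 32 MiB / 512-byte blocks
data_offset=2048                  # blocks reserved for the superblock
data_size=$(( base_blocks - data_offset ))
echo "data_size: ${data_size}"    # 63488, as shown in base_bdevs_list

# raid5f stores one parity strip per stripe, so 2 of the 3 bdevs hold data:
raid_size=$(( data_size * (3 - 1) ))
echo "raid_bdev_size: ${raid_size} blocks"       # 126976

# Full-stripe write unit: 64 KiB strip * 2 data strips = 256 blocks:
write_unit_blocks=$(( 2 * 64 * 1024 / 512 ))
echo "write_unit: ${write_unit_blocks} blocks"   # 256, i.e. 128 KiB
```

The `dd ... bs=131072 count=496` line below is consistent with this: 496 full-stripe writes of 128 KiB each is 65011712 bytes, exactly what the log reports.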
00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:27.737 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:27.996 [2024-12-07 02:47:38.816780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:27.996 /dev/nbd0 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.996 1+0 records in 00:14:27.996 1+0 records out 00:14:27.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391282 s, 10.5 MB/s 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.996 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:27.997 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:27.997 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:27.997 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:27.997 02:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:28.256 496+0 records in 00:14:28.256 496+0 records out 00:14:28.256 65011712 bytes (65 MB, 62 MiB) copied, 0.307355 s, 212 MB/s 00:14:28.256 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:28.256 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.256 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:28.256 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.256 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:28.256 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:28.256 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:28.516 [2024-12-07 02:47:39.405660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.516 [2024-12-07 02:47:39.432005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.516 "name": "raid_bdev1", 00:14:28.516 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:28.516 "strip_size_kb": 64, 00:14:28.516 "state": "online", 00:14:28.516 "raid_level": "raid5f", 00:14:28.516 "superblock": true, 00:14:28.516 "num_base_bdevs": 3, 00:14:28.516 "num_base_bdevs_discovered": 2, 00:14:28.516 "num_base_bdevs_operational": 2, 00:14:28.516 "base_bdevs_list": [ 00:14:28.516 { 00:14:28.516 "name": null, 00:14:28.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.516 "is_configured": 
false, 00:14:28.516 "data_offset": 0, 00:14:28.516 "data_size": 63488 00:14:28.516 }, 00:14:28.516 { 00:14:28.516 "name": "BaseBdev2", 00:14:28.516 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:28.516 "is_configured": true, 00:14:28.516 "data_offset": 2048, 00:14:28.516 "data_size": 63488 00:14:28.516 }, 00:14:28.516 { 00:14:28.516 "name": "BaseBdev3", 00:14:28.516 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:28.516 "is_configured": true, 00:14:28.516 "data_offset": 2048, 00:14:28.516 "data_size": 63488 00:14:28.516 } 00:14:28.516 ] 00:14:28.516 }' 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.516 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.085 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:29.085 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.085 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.085 [2024-12-07 02:47:39.887201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:29.085 [2024-12-07 02:47:39.893791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028de0 00:14:29.085 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.085 02:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:29.085 [2024-12-07 02:47:39.896211] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:30.023 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.024 02:47:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.024 "name": "raid_bdev1", 00:14:30.024 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:30.024 "strip_size_kb": 64, 00:14:30.024 "state": "online", 00:14:30.024 "raid_level": "raid5f", 00:14:30.024 "superblock": true, 00:14:30.024 "num_base_bdevs": 3, 00:14:30.024 "num_base_bdevs_discovered": 3, 00:14:30.024 "num_base_bdevs_operational": 3, 00:14:30.024 "process": { 00:14:30.024 "type": "rebuild", 00:14:30.024 "target": "spare", 00:14:30.024 "progress": { 00:14:30.024 "blocks": 20480, 00:14:30.024 "percent": 16 00:14:30.024 } 00:14:30.024 }, 00:14:30.024 "base_bdevs_list": [ 00:14:30.024 { 00:14:30.024 "name": "spare", 00:14:30.024 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:30.024 "is_configured": true, 00:14:30.024 "data_offset": 2048, 00:14:30.024 "data_size": 63488 00:14:30.024 }, 00:14:30.024 { 00:14:30.024 "name": "BaseBdev2", 00:14:30.024 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:30.024 "is_configured": true, 00:14:30.024 "data_offset": 2048, 00:14:30.024 "data_size": 63488 
00:14:30.024 }, 00:14:30.024 { 00:14:30.024 "name": "BaseBdev3", 00:14:30.024 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:30.024 "is_configured": true, 00:14:30.024 "data_offset": 2048, 00:14:30.024 "data_size": 63488 00:14:30.024 } 00:14:30.024 ] 00:14:30.024 }' 00:14:30.024 02:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.024 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.024 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.024 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.024 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:30.024 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.024 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.024 [2024-12-07 02:47:41.035778] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.284 [2024-12-07 02:47:41.104483] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:30.284 [2024-12-07 02:47:41.104559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.284 [2024-12-07 02:47:41.104578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:30.284 [2024-12-07 02:47:41.104608] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.284 "name": "raid_bdev1", 00:14:30.284 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:30.284 "strip_size_kb": 64, 00:14:30.284 "state": "online", 00:14:30.284 "raid_level": "raid5f", 00:14:30.284 "superblock": true, 00:14:30.284 "num_base_bdevs": 3, 00:14:30.284 "num_base_bdevs_discovered": 2, 00:14:30.284 "num_base_bdevs_operational": 2, 00:14:30.284 "base_bdevs_list": [ 00:14:30.284 
{ 00:14:30.284 "name": null, 00:14:30.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.284 "is_configured": false, 00:14:30.284 "data_offset": 0, 00:14:30.284 "data_size": 63488 00:14:30.284 }, 00:14:30.284 { 00:14:30.284 "name": "BaseBdev2", 00:14:30.284 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:30.284 "is_configured": true, 00:14:30.284 "data_offset": 2048, 00:14:30.284 "data_size": 63488 00:14:30.284 }, 00:14:30.284 { 00:14:30.284 "name": "BaseBdev3", 00:14:30.284 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:30.284 "is_configured": true, 00:14:30.284 "data_offset": 2048, 00:14:30.284 "data_size": 63488 00:14:30.284 } 00:14:30.284 ] 00:14:30.284 }' 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.284 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.810 "name": "raid_bdev1", 00:14:30.810 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:30.810 "strip_size_kb": 64, 00:14:30.810 "state": "online", 00:14:30.810 "raid_level": "raid5f", 00:14:30.810 "superblock": true, 00:14:30.810 "num_base_bdevs": 3, 00:14:30.810 "num_base_bdevs_discovered": 2, 00:14:30.810 "num_base_bdevs_operational": 2, 00:14:30.810 "base_bdevs_list": [ 00:14:30.810 { 00:14:30.810 "name": null, 00:14:30.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.810 "is_configured": false, 00:14:30.810 "data_offset": 0, 00:14:30.810 "data_size": 63488 00:14:30.810 }, 00:14:30.810 { 00:14:30.810 "name": "BaseBdev2", 00:14:30.810 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:30.810 "is_configured": true, 00:14:30.810 "data_offset": 2048, 00:14:30.810 "data_size": 63488 00:14:30.810 }, 00:14:30.810 { 00:14:30.810 "name": "BaseBdev3", 00:14:30.810 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:30.810 "is_configured": true, 00:14:30.810 "data_offset": 2048, 00:14:30.810 "data_size": 63488 00:14:30.810 } 00:14:30.810 ] 00:14:30.810 }' 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:30.810 [2024-12-07 02:47:41.728614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:30.810 [2024-12-07 02:47:41.733025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028eb0 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.810 02:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:30.810 [2024-12-07 02:47:41.735325] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.806 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.806 "name": "raid_bdev1", 00:14:31.806 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:31.806 "strip_size_kb": 64, 00:14:31.806 "state": "online", 
00:14:31.806 "raid_level": "raid5f", 00:14:31.806 "superblock": true, 00:14:31.806 "num_base_bdevs": 3, 00:14:31.806 "num_base_bdevs_discovered": 3, 00:14:31.806 "num_base_bdevs_operational": 3, 00:14:31.806 "process": { 00:14:31.806 "type": "rebuild", 00:14:31.806 "target": "spare", 00:14:31.806 "progress": { 00:14:31.806 "blocks": 20480, 00:14:31.806 "percent": 16 00:14:31.806 } 00:14:31.806 }, 00:14:31.806 "base_bdevs_list": [ 00:14:31.806 { 00:14:31.806 "name": "spare", 00:14:31.806 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:31.806 "is_configured": true, 00:14:31.806 "data_offset": 2048, 00:14:31.806 "data_size": 63488 00:14:31.806 }, 00:14:31.806 { 00:14:31.806 "name": "BaseBdev2", 00:14:31.806 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:31.806 "is_configured": true, 00:14:31.806 "data_offset": 2048, 00:14:31.806 "data_size": 63488 00:14:31.806 }, 00:14:31.806 { 00:14:31.806 "name": "BaseBdev3", 00:14:31.806 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:31.807 "is_configured": true, 00:14:31.807 "data_offset": 2048, 00:14:31.807 "data_size": 63488 00:14:31.807 } 00:14:31.807 ] 00:14:31.807 }' 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:31.807 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=475 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.807 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.065 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.065 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.065 "name": "raid_bdev1", 00:14:32.065 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:32.065 "strip_size_kb": 64, 00:14:32.065 "state": "online", 00:14:32.065 "raid_level": "raid5f", 00:14:32.065 "superblock": true, 00:14:32.065 "num_base_bdevs": 3, 00:14:32.065 "num_base_bdevs_discovered": 3, 00:14:32.065 "num_base_bdevs_operational": 3, 00:14:32.065 "process": { 00:14:32.065 "type": 
"rebuild", 00:14:32.065 "target": "spare", 00:14:32.065 "progress": { 00:14:32.065 "blocks": 22528, 00:14:32.065 "percent": 17 00:14:32.065 } 00:14:32.065 }, 00:14:32.065 "base_bdevs_list": [ 00:14:32.065 { 00:14:32.065 "name": "spare", 00:14:32.065 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:32.065 "is_configured": true, 00:14:32.065 "data_offset": 2048, 00:14:32.065 "data_size": 63488 00:14:32.065 }, 00:14:32.065 { 00:14:32.065 "name": "BaseBdev2", 00:14:32.065 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:32.065 "is_configured": true, 00:14:32.065 "data_offset": 2048, 00:14:32.065 "data_size": 63488 00:14:32.065 }, 00:14:32.065 { 00:14:32.065 "name": "BaseBdev3", 00:14:32.065 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:32.065 "is_configured": true, 00:14:32.065 "data_offset": 2048, 00:14:32.065 "data_size": 63488 00:14:32.065 } 00:14:32.065 ] 00:14:32.065 }' 00:14:32.065 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.065 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.065 02:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.065 02:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.065 02:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.009 "name": "raid_bdev1", 00:14:33.009 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:33.009 "strip_size_kb": 64, 00:14:33.009 "state": "online", 00:14:33.009 "raid_level": "raid5f", 00:14:33.009 "superblock": true, 00:14:33.009 "num_base_bdevs": 3, 00:14:33.009 "num_base_bdevs_discovered": 3, 00:14:33.009 "num_base_bdevs_operational": 3, 00:14:33.009 "process": { 00:14:33.009 "type": "rebuild", 00:14:33.009 "target": "spare", 00:14:33.009 "progress": { 00:14:33.009 "blocks": 45056, 00:14:33.009 "percent": 35 00:14:33.009 } 00:14:33.009 }, 00:14:33.009 "base_bdevs_list": [ 00:14:33.009 { 00:14:33.009 "name": "spare", 00:14:33.009 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:33.009 "is_configured": true, 00:14:33.009 "data_offset": 2048, 00:14:33.009 "data_size": 63488 00:14:33.009 }, 00:14:33.009 { 00:14:33.009 "name": "BaseBdev2", 00:14:33.009 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:33.009 "is_configured": true, 00:14:33.009 "data_offset": 2048, 00:14:33.009 "data_size": 63488 00:14:33.009 }, 00:14:33.009 { 00:14:33.009 "name": "BaseBdev3", 00:14:33.009 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:33.009 
"is_configured": true, 00:14:33.009 "data_offset": 2048, 00:14:33.009 "data_size": 63488 00:14:33.009 } 00:14:33.009 ] 00:14:33.009 }' 00:14:33.009 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.269 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.269 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.269 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.269 02:47:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.207 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.208 "name": "raid_bdev1", 00:14:34.208 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:34.208 "strip_size_kb": 64, 00:14:34.208 "state": "online", 00:14:34.208 "raid_level": "raid5f", 00:14:34.208 "superblock": true, 00:14:34.208 "num_base_bdevs": 3, 00:14:34.208 "num_base_bdevs_discovered": 3, 00:14:34.208 "num_base_bdevs_operational": 3, 00:14:34.208 "process": { 00:14:34.208 "type": "rebuild", 00:14:34.208 "target": "spare", 00:14:34.208 "progress": { 00:14:34.208 "blocks": 69632, 00:14:34.208 "percent": 54 00:14:34.208 } 00:14:34.208 }, 00:14:34.208 "base_bdevs_list": [ 00:14:34.208 { 00:14:34.208 "name": "spare", 00:14:34.208 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:34.208 "is_configured": true, 00:14:34.208 "data_offset": 2048, 00:14:34.208 "data_size": 63488 00:14:34.208 }, 00:14:34.208 { 00:14:34.208 "name": "BaseBdev2", 00:14:34.208 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:34.208 "is_configured": true, 00:14:34.208 "data_offset": 2048, 00:14:34.208 "data_size": 63488 00:14:34.208 }, 00:14:34.208 { 00:14:34.208 "name": "BaseBdev3", 00:14:34.208 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:34.208 "is_configured": true, 00:14:34.208 "data_offset": 2048, 00:14:34.208 "data_size": 63488 00:14:34.208 } 00:14:34.208 ] 00:14:34.208 }' 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.208 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.466 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.466 02:47:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.401 "name": "raid_bdev1", 00:14:35.401 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:35.401 "strip_size_kb": 64, 00:14:35.401 "state": "online", 00:14:35.401 "raid_level": "raid5f", 00:14:35.401 "superblock": true, 00:14:35.401 "num_base_bdevs": 3, 00:14:35.401 "num_base_bdevs_discovered": 3, 00:14:35.401 "num_base_bdevs_operational": 3, 00:14:35.401 "process": { 00:14:35.401 "type": "rebuild", 00:14:35.401 "target": "spare", 00:14:35.401 "progress": { 00:14:35.401 "blocks": 92160, 00:14:35.401 "percent": 72 00:14:35.401 } 00:14:35.401 }, 00:14:35.401 "base_bdevs_list": [ 00:14:35.401 { 00:14:35.401 "name": "spare", 00:14:35.401 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:35.401 "is_configured": true, 
00:14:35.401 "data_offset": 2048, 00:14:35.401 "data_size": 63488 00:14:35.401 }, 00:14:35.401 { 00:14:35.401 "name": "BaseBdev2", 00:14:35.401 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:35.401 "is_configured": true, 00:14:35.401 "data_offset": 2048, 00:14:35.401 "data_size": 63488 00:14:35.401 }, 00:14:35.401 { 00:14:35.401 "name": "BaseBdev3", 00:14:35.401 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:35.401 "is_configured": true, 00:14:35.401 "data_offset": 2048, 00:14:35.401 "data_size": 63488 00:14:35.401 } 00:14:35.401 ] 00:14:35.401 }' 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.401 02:47:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.777 "name": "raid_bdev1", 00:14:36.777 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:36.777 "strip_size_kb": 64, 00:14:36.777 "state": "online", 00:14:36.777 "raid_level": "raid5f", 00:14:36.777 "superblock": true, 00:14:36.777 "num_base_bdevs": 3, 00:14:36.777 "num_base_bdevs_discovered": 3, 00:14:36.777 "num_base_bdevs_operational": 3, 00:14:36.777 "process": { 00:14:36.777 "type": "rebuild", 00:14:36.777 "target": "spare", 00:14:36.777 "progress": { 00:14:36.777 "blocks": 116736, 00:14:36.777 "percent": 91 00:14:36.777 } 00:14:36.777 }, 00:14:36.777 "base_bdevs_list": [ 00:14:36.777 { 00:14:36.777 "name": "spare", 00:14:36.777 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:36.777 "is_configured": true, 00:14:36.777 "data_offset": 2048, 00:14:36.777 "data_size": 63488 00:14:36.777 }, 00:14:36.777 { 00:14:36.777 "name": "BaseBdev2", 00:14:36.777 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:36.777 "is_configured": true, 00:14:36.777 "data_offset": 2048, 00:14:36.777 "data_size": 63488 00:14:36.777 }, 00:14:36.777 { 00:14:36.777 "name": "BaseBdev3", 00:14:36.777 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:36.777 "is_configured": true, 00:14:36.777 "data_offset": 2048, 00:14:36.777 "data_size": 63488 00:14:36.777 } 00:14:36.777 ] 00:14:36.777 }' 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.777 02:47:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.036 [2024-12-07 02:47:47.974106] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:37.036 [2024-12-07 02:47:47.974178] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:37.036 [2024-12-07 02:47:47.974284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.603 02:47:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.603 "name": "raid_bdev1", 00:14:37.603 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:37.603 "strip_size_kb": 64, 00:14:37.603 "state": "online", 00:14:37.603 "raid_level": "raid5f", 00:14:37.603 "superblock": true, 00:14:37.603 "num_base_bdevs": 3, 00:14:37.603 "num_base_bdevs_discovered": 3, 00:14:37.603 "num_base_bdevs_operational": 3, 00:14:37.603 "base_bdevs_list": [ 00:14:37.603 { 00:14:37.603 "name": "spare", 00:14:37.603 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:37.603 "is_configured": true, 00:14:37.603 "data_offset": 2048, 00:14:37.603 "data_size": 63488 00:14:37.603 }, 00:14:37.603 { 00:14:37.603 "name": "BaseBdev2", 00:14:37.603 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:37.603 "is_configured": true, 00:14:37.603 "data_offset": 2048, 00:14:37.603 "data_size": 63488 00:14:37.603 }, 00:14:37.603 { 00:14:37.603 "name": "BaseBdev3", 00:14:37.603 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:37.603 "is_configured": true, 00:14:37.603 "data_offset": 2048, 00:14:37.603 "data_size": 63488 00:14:37.603 } 00:14:37.603 ] 00:14:37.603 }' 00:14:37.603 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.863 
02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.863 "name": "raid_bdev1", 00:14:37.863 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:37.863 "strip_size_kb": 64, 00:14:37.863 "state": "online", 00:14:37.863 "raid_level": "raid5f", 00:14:37.863 "superblock": true, 00:14:37.863 "num_base_bdevs": 3, 00:14:37.863 "num_base_bdevs_discovered": 3, 00:14:37.863 "num_base_bdevs_operational": 3, 00:14:37.863 "base_bdevs_list": [ 00:14:37.863 { 00:14:37.863 "name": "spare", 00:14:37.863 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:37.863 "is_configured": true, 00:14:37.863 "data_offset": 2048, 00:14:37.863 "data_size": 63488 00:14:37.863 }, 00:14:37.863 { 00:14:37.863 "name": "BaseBdev2", 00:14:37.863 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:37.863 "is_configured": true, 00:14:37.863 "data_offset": 2048, 00:14:37.863 "data_size": 63488 00:14:37.863 }, 00:14:37.863 { 00:14:37.863 "name": "BaseBdev3", 00:14:37.863 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:37.863 "is_configured": true, 00:14:37.863 "data_offset": 2048, 
00:14:37.863 "data_size": 63488 00:14:37.863 } 00:14:37.863 ] 00:14:37.863 }' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.863 "name": "raid_bdev1", 00:14:37.863 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:37.863 "strip_size_kb": 64, 00:14:37.863 "state": "online", 00:14:37.863 "raid_level": "raid5f", 00:14:37.863 "superblock": true, 00:14:37.863 "num_base_bdevs": 3, 00:14:37.863 "num_base_bdevs_discovered": 3, 00:14:37.863 "num_base_bdevs_operational": 3, 00:14:37.863 "base_bdevs_list": [ 00:14:37.863 { 00:14:37.863 "name": "spare", 00:14:37.863 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:37.863 "is_configured": true, 00:14:37.863 "data_offset": 2048, 00:14:37.863 "data_size": 63488 00:14:37.863 }, 00:14:37.863 { 00:14:37.863 "name": "BaseBdev2", 00:14:37.863 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:37.863 "is_configured": true, 00:14:37.863 "data_offset": 2048, 00:14:37.863 "data_size": 63488 00:14:37.863 }, 00:14:37.863 { 00:14:37.863 "name": "BaseBdev3", 00:14:37.863 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:37.863 "is_configured": true, 00:14:37.863 "data_offset": 2048, 00:14:37.863 "data_size": 63488 00:14:37.863 } 00:14:37.863 ] 00:14:37.863 }' 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.863 02:47:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.430 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.430 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.430 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.431 [2024-12-07 02:47:49.284565] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.431 [2024-12-07 02:47:49.284715] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.431 [2024-12-07 02:47:49.284853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.431 [2024-12-07 02:47:49.284994] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.431 [2024-12-07 02:47:49.285055] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:38.431 02:47:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.431 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:38.690 /dev/nbd0 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.690 1+0 records in 00:14:38.690 1+0 records out 00:14:38.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545551 s, 7.5 MB/s 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.690 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:38.949 /dev/nbd1 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:38.949 
02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.949 1+0 records in 00:14:38.949 1+0 records out 00:14:38.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420764 s, 9.7 MB/s 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.949 02:47:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.209 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.469 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.469 [2024-12-07 02:47:50.343479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:39.469 [2024-12-07 02:47:50.343615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.469 [2024-12-07 02:47:50.343646] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:39.469 [2024-12-07 02:47:50.343655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.469 [2024-12-07 02:47:50.345844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.469 [2024-12-07 02:47:50.345880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:39.469 [2024-12-07 02:47:50.345961] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:39.469 [2024-12-07 02:47:50.346006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.469 [2024-12-07 02:47:50.346115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.469 [2024-12-07 02:47:50.346208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:39.469 spare 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.470 [2024-12-07 02:47:50.446095] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:14:39.470 [2024-12-07 02:47:50.446117] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.470 [2024-12-07 02:47:50.446343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047560 00:14:39.470 [2024-12-07 02:47:50.446791] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:14:39.470 [2024-12-07 02:47:50.446806] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:14:39.470 [2024-12-07 02:47:50.446932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.470 "name": "raid_bdev1", 00:14:39.470 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:39.470 "strip_size_kb": 64, 00:14:39.470 "state": "online", 00:14:39.470 "raid_level": "raid5f", 00:14:39.470 "superblock": true, 00:14:39.470 "num_base_bdevs": 3, 00:14:39.470 "num_base_bdevs_discovered": 3, 00:14:39.470 "num_base_bdevs_operational": 3, 00:14:39.470 "base_bdevs_list": [ 00:14:39.470 { 
00:14:39.470 "name": "spare", 00:14:39.470 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:39.470 "is_configured": true, 00:14:39.470 "data_offset": 2048, 00:14:39.470 "data_size": 63488 00:14:39.470 }, 00:14:39.470 { 00:14:39.470 "name": "BaseBdev2", 00:14:39.470 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:39.470 "is_configured": true, 00:14:39.470 "data_offset": 2048, 00:14:39.470 "data_size": 63488 00:14:39.470 }, 00:14:39.470 { 00:14:39.470 "name": "BaseBdev3", 00:14:39.470 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:39.470 "is_configured": true, 00:14:39.470 "data_offset": 2048, 00:14:39.470 "data_size": 63488 00:14:39.470 } 00:14:39.470 ] 00:14:39.470 }' 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.470 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.040 "name": "raid_bdev1", 00:14:40.040 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:40.040 "strip_size_kb": 64, 00:14:40.040 "state": "online", 00:14:40.040 "raid_level": "raid5f", 00:14:40.040 "superblock": true, 00:14:40.040 "num_base_bdevs": 3, 00:14:40.040 "num_base_bdevs_discovered": 3, 00:14:40.040 "num_base_bdevs_operational": 3, 00:14:40.040 "base_bdevs_list": [ 00:14:40.040 { 00:14:40.040 "name": "spare", 00:14:40.040 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:40.040 "is_configured": true, 00:14:40.040 "data_offset": 2048, 00:14:40.040 "data_size": 63488 00:14:40.040 }, 00:14:40.040 { 00:14:40.040 "name": "BaseBdev2", 00:14:40.040 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:40.040 "is_configured": true, 00:14:40.040 "data_offset": 2048, 00:14:40.040 "data_size": 63488 00:14:40.040 }, 00:14:40.040 { 00:14:40.040 "name": "BaseBdev3", 00:14:40.040 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:40.040 "is_configured": true, 00:14:40.040 "data_offset": 2048, 00:14:40.040 "data_size": 63488 00:14:40.040 } 00:14:40.040 ] 00:14:40.040 }' 00:14:40.040 02:47:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.040 [2024-12-07 02:47:51.095173] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.040 02:47:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.040 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.301 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.301 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.301 "name": "raid_bdev1", 00:14:40.301 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:40.301 "strip_size_kb": 64, 00:14:40.301 "state": "online", 00:14:40.301 "raid_level": "raid5f", 00:14:40.301 "superblock": true, 00:14:40.301 "num_base_bdevs": 3, 00:14:40.301 "num_base_bdevs_discovered": 2, 00:14:40.301 "num_base_bdevs_operational": 2, 00:14:40.301 "base_bdevs_list": [ 00:14:40.301 { 00:14:40.301 "name": null, 00:14:40.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.301 "is_configured": false, 00:14:40.301 "data_offset": 0, 00:14:40.301 "data_size": 63488 00:14:40.301 }, 00:14:40.301 { 00:14:40.301 "name": "BaseBdev2", 00:14:40.301 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:40.301 "is_configured": true, 00:14:40.301 "data_offset": 2048, 00:14:40.301 "data_size": 63488 00:14:40.301 }, 00:14:40.301 { 00:14:40.301 "name": "BaseBdev3", 00:14:40.301 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:40.301 "is_configured": true, 00:14:40.301 "data_offset": 2048, 00:14:40.301 "data_size": 63488 00:14:40.301 } 00:14:40.301 ] 00:14:40.301 }' 00:14:40.301 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.301 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:40.562 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.562 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.562 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.562 [2024-12-07 02:47:51.602283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.562 [2024-12-07 02:47:51.602479] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:40.562 [2024-12-07 02:47:51.602533] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:40.562 [2024-12-07 02:47:51.602593] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.562 [2024-12-07 02:47:51.606376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:14:40.562 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.562 02:47:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:40.562 [2024-12-07 02:47:51.608459] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.943 
02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.943 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.943 "name": "raid_bdev1", 00:14:41.943 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:41.943 "strip_size_kb": 64, 00:14:41.943 "state": "online", 00:14:41.943 "raid_level": "raid5f", 00:14:41.943 "superblock": true, 00:14:41.943 "num_base_bdevs": 3, 00:14:41.943 "num_base_bdevs_discovered": 3, 00:14:41.944 "num_base_bdevs_operational": 3, 00:14:41.944 "process": { 00:14:41.944 "type": "rebuild", 00:14:41.944 "target": "spare", 00:14:41.944 "progress": { 00:14:41.944 "blocks": 20480, 00:14:41.944 "percent": 16 00:14:41.944 } 00:14:41.944 }, 00:14:41.944 "base_bdevs_list": [ 00:14:41.944 { 00:14:41.944 "name": "spare", 00:14:41.944 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:41.944 "is_configured": true, 00:14:41.944 "data_offset": 2048, 00:14:41.944 "data_size": 63488 00:14:41.944 }, 00:14:41.944 { 00:14:41.944 "name": "BaseBdev2", 00:14:41.944 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:41.944 "is_configured": true, 00:14:41.944 "data_offset": 2048, 00:14:41.944 "data_size": 63488 00:14:41.944 }, 00:14:41.944 { 00:14:41.944 "name": "BaseBdev3", 00:14:41.944 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:41.944 "is_configured": true, 00:14:41.944 "data_offset": 2048, 00:14:41.944 "data_size": 63488 00:14:41.944 } 00:14:41.944 ] 00:14:41.944 }' 00:14:41.944 02:47:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.944 [2024-12-07 02:47:52.749249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.944 [2024-12-07 02:47:52.815915] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.944 [2024-12-07 02:47:52.815971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.944 [2024-12-07 02:47:52.815989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.944 [2024-12-07 02:47:52.815995] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.944 
02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.944 "name": "raid_bdev1", 00:14:41.944 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:41.944 "strip_size_kb": 64, 00:14:41.944 "state": "online", 00:14:41.944 "raid_level": "raid5f", 00:14:41.944 "superblock": true, 00:14:41.944 "num_base_bdevs": 3, 00:14:41.944 "num_base_bdevs_discovered": 2, 00:14:41.944 "num_base_bdevs_operational": 2, 00:14:41.944 "base_bdevs_list": [ 00:14:41.944 { 00:14:41.944 "name": null, 00:14:41.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.944 "is_configured": false, 00:14:41.944 "data_offset": 0, 00:14:41.944 "data_size": 63488 00:14:41.944 }, 00:14:41.944 { 00:14:41.944 "name": "BaseBdev2", 00:14:41.944 "uuid": 
"dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:41.944 "is_configured": true, 00:14:41.944 "data_offset": 2048, 00:14:41.944 "data_size": 63488 00:14:41.944 }, 00:14:41.944 { 00:14:41.944 "name": "BaseBdev3", 00:14:41.944 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:41.944 "is_configured": true, 00:14:41.944 "data_offset": 2048, 00:14:41.944 "data_size": 63488 00:14:41.944 } 00:14:41.944 ] 00:14:41.944 }' 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.944 02:47:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.514 02:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:42.514 02:47:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.514 02:47:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.514 [2024-12-07 02:47:53.299893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:42.514 [2024-12-07 02:47:53.299992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.514 [2024-12-07 02:47:53.300029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:42.514 [2024-12-07 02:47:53.300056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.514 [2024-12-07 02:47:53.300524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.514 [2024-12-07 02:47:53.300591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:42.514 [2024-12-07 02:47:53.300701] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:42.514 [2024-12-07 02:47:53.300740] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:14:42.514 [2024-12-07 02:47:53.300780] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:42.514 [2024-12-07 02:47:53.300822] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:42.514 [2024-12-07 02:47:53.304175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:14:42.514 spare 00:14:42.514 02:47:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.514 02:47:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:42.514 [2024-12-07 02:47:53.306321] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.455 "name": 
"raid_bdev1", 00:14:43.455 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:43.455 "strip_size_kb": 64, 00:14:43.455 "state": "online", 00:14:43.455 "raid_level": "raid5f", 00:14:43.455 "superblock": true, 00:14:43.455 "num_base_bdevs": 3, 00:14:43.455 "num_base_bdevs_discovered": 3, 00:14:43.455 "num_base_bdevs_operational": 3, 00:14:43.455 "process": { 00:14:43.455 "type": "rebuild", 00:14:43.455 "target": "spare", 00:14:43.455 "progress": { 00:14:43.455 "blocks": 20480, 00:14:43.455 "percent": 16 00:14:43.455 } 00:14:43.455 }, 00:14:43.455 "base_bdevs_list": [ 00:14:43.455 { 00:14:43.455 "name": "spare", 00:14:43.455 "uuid": "18147565-9e60-545a-9896-d39b21f24647", 00:14:43.455 "is_configured": true, 00:14:43.455 "data_offset": 2048, 00:14:43.455 "data_size": 63488 00:14:43.455 }, 00:14:43.455 { 00:14:43.455 "name": "BaseBdev2", 00:14:43.455 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:43.455 "is_configured": true, 00:14:43.455 "data_offset": 2048, 00:14:43.455 "data_size": 63488 00:14:43.455 }, 00:14:43.455 { 00:14:43.455 "name": "BaseBdev3", 00:14:43.455 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:43.455 "is_configured": true, 00:14:43.455 "data_offset": 2048, 00:14:43.455 "data_size": 63488 00:14:43.455 } 00:14:43.455 ] 00:14:43.455 }' 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.455 02:47:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.455 [2024-12-07 02:47:54.467343] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.455 [2024-12-07 02:47:54.512756] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:43.455 [2024-12-07 02:47:54.512809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.455 [2024-12-07 02:47:54.512824] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.455 [2024-12-07 02:47:54.512834] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.455 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.715 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.715 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.715 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.715 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.715 "name": "raid_bdev1", 00:14:43.715 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:43.715 "strip_size_kb": 64, 00:14:43.715 "state": "online", 00:14:43.715 "raid_level": "raid5f", 00:14:43.715 "superblock": true, 00:14:43.715 "num_base_bdevs": 3, 00:14:43.715 "num_base_bdevs_discovered": 2, 00:14:43.715 "num_base_bdevs_operational": 2, 00:14:43.715 "base_bdevs_list": [ 00:14:43.715 { 00:14:43.715 "name": null, 00:14:43.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.715 "is_configured": false, 00:14:43.715 "data_offset": 0, 00:14:43.715 "data_size": 63488 00:14:43.715 }, 00:14:43.715 { 00:14:43.715 "name": "BaseBdev2", 00:14:43.715 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:43.715 "is_configured": true, 00:14:43.715 "data_offset": 2048, 00:14:43.715 "data_size": 63488 00:14:43.715 }, 00:14:43.715 { 00:14:43.715 "name": "BaseBdev3", 00:14:43.715 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:43.715 "is_configured": true, 00:14:43.715 "data_offset": 2048, 00:14:43.715 "data_size": 63488 00:14:43.715 } 00:14:43.715 ] 00:14:43.715 }' 00:14:43.715 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.715 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.975 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.976 02:47:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.976 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.976 "name": "raid_bdev1", 00:14:43.976 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:43.976 "strip_size_kb": 64, 00:14:43.976 "state": "online", 00:14:43.976 "raid_level": "raid5f", 00:14:43.976 "superblock": true, 00:14:43.976 "num_base_bdevs": 3, 00:14:43.976 "num_base_bdevs_discovered": 2, 00:14:43.976 "num_base_bdevs_operational": 2, 00:14:43.976 "base_bdevs_list": [ 00:14:43.976 { 00:14:43.976 "name": null, 00:14:43.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.976 "is_configured": false, 00:14:43.976 "data_offset": 0, 00:14:43.976 "data_size": 63488 00:14:43.976 }, 00:14:43.976 { 00:14:43.976 "name": "BaseBdev2", 00:14:43.976 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:43.976 "is_configured": true, 00:14:43.976 "data_offset": 2048, 00:14:43.976 "data_size": 63488 00:14:43.976 }, 00:14:43.976 { 
00:14:43.976 "name": "BaseBdev3", 00:14:43.976 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:43.976 "is_configured": true, 00:14:43.976 "data_offset": 2048, 00:14:43.976 "data_size": 63488 00:14:43.976 } 00:14:43.976 ] 00:14:43.976 }' 00:14:43.976 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.234 [2024-12-07 02:47:55.100527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.234 [2024-12-07 02:47:55.100622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.234 [2024-12-07 02:47:55.100647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:44.234 [2024-12-07 02:47:55.100658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.234 
[2024-12-07 02:47:55.101021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.234 [2024-12-07 02:47:55.101040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.234 [2024-12-07 02:47:55.101103] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:44.234 [2024-12-07 02:47:55.101118] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:44.234 [2024-12-07 02:47:55.101125] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.234 [2024-12-07 02:47:55.101137] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:44.234 BaseBdev1 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.234 02:47:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:45.171 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:45.171 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.171 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.171 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.171 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.171 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.172 02:47:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.172 "name": "raid_bdev1", 00:14:45.172 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:45.172 "strip_size_kb": 64, 00:14:45.172 "state": "online", 00:14:45.172 "raid_level": "raid5f", 00:14:45.172 "superblock": true, 00:14:45.172 "num_base_bdevs": 3, 00:14:45.172 "num_base_bdevs_discovered": 2, 00:14:45.172 "num_base_bdevs_operational": 2, 00:14:45.172 "base_bdevs_list": [ 00:14:45.172 { 00:14:45.172 "name": null, 00:14:45.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.172 "is_configured": false, 00:14:45.172 "data_offset": 0, 00:14:45.172 "data_size": 63488 00:14:45.172 }, 00:14:45.172 { 00:14:45.172 "name": "BaseBdev2", 00:14:45.172 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:45.172 "is_configured": true, 00:14:45.172 "data_offset": 2048, 00:14:45.172 "data_size": 63488 00:14:45.172 }, 00:14:45.172 { 00:14:45.172 "name": "BaseBdev3", 00:14:45.172 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:45.172 "is_configured": true, 00:14:45.172 "data_offset": 2048, 00:14:45.172 "data_size": 63488 00:14:45.172 } 00:14:45.172 ] 00:14:45.172 }' 00:14:45.172 02:47:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.172 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.741 "name": "raid_bdev1", 00:14:45.741 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:45.741 "strip_size_kb": 64, 00:14:45.741 "state": "online", 00:14:45.741 "raid_level": "raid5f", 00:14:45.741 "superblock": true, 00:14:45.741 "num_base_bdevs": 3, 00:14:45.741 "num_base_bdevs_discovered": 2, 00:14:45.741 "num_base_bdevs_operational": 2, 00:14:45.741 "base_bdevs_list": [ 00:14:45.741 { 00:14:45.741 "name": null, 00:14:45.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.741 "is_configured": false, 00:14:45.741 "data_offset": 0, 00:14:45.741 "data_size": 63488 
00:14:45.741 }, 00:14:45.741 { 00:14:45.741 "name": "BaseBdev2", 00:14:45.741 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:45.741 "is_configured": true, 00:14:45.741 "data_offset": 2048, 00:14:45.741 "data_size": 63488 00:14:45.741 }, 00:14:45.741 { 00:14:45.741 "name": "BaseBdev3", 00:14:45.741 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:45.741 "is_configured": true, 00:14:45.741 "data_offset": 2048, 00:14:45.741 "data_size": 63488 00:14:45.741 } 00:14:45.741 ] 00:14:45.741 }' 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.741 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:45.742 02:47:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.742 [2024-12-07 02:47:56.657838] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.742 [2024-12-07 02:47:56.657977] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:45.742 [2024-12-07 02:47:56.657989] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:45.742 request: 00:14:45.742 { 00:14:45.742 "base_bdev": "BaseBdev1", 00:14:45.742 "raid_bdev": "raid_bdev1", 00:14:45.742 "method": "bdev_raid_add_base_bdev", 00:14:45.742 "req_id": 1 00:14:45.742 } 00:14:45.742 Got JSON-RPC error response 00:14:45.742 response: 00:14:45.742 { 00:14:45.742 "code": -22, 00:14:45.742 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:45.742 } 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.742 02:47:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.679 "name": "raid_bdev1", 00:14:46.679 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:46.679 "strip_size_kb": 64, 00:14:46.679 "state": "online", 00:14:46.679 "raid_level": "raid5f", 00:14:46.679 "superblock": true, 00:14:46.679 "num_base_bdevs": 3, 00:14:46.679 "num_base_bdevs_discovered": 2, 00:14:46.679 "num_base_bdevs_operational": 2, 00:14:46.679 "base_bdevs_list": [ 00:14:46.679 { 00:14:46.679 "name": null, 00:14:46.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.679 "is_configured": false, 00:14:46.679 
"data_offset": 0, 00:14:46.679 "data_size": 63488 00:14:46.679 }, 00:14:46.679 { 00:14:46.679 "name": "BaseBdev2", 00:14:46.679 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:46.679 "is_configured": true, 00:14:46.679 "data_offset": 2048, 00:14:46.679 "data_size": 63488 00:14:46.679 }, 00:14:46.679 { 00:14:46.679 "name": "BaseBdev3", 00:14:46.679 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:46.679 "is_configured": true, 00:14:46.679 "data_offset": 2048, 00:14:46.679 "data_size": 63488 00:14:46.679 } 00:14:46.679 ] 00:14:46.679 }' 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.679 02:47:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.267 "name": 
"raid_bdev1", 00:14:47.267 "uuid": "e5337f0d-3fb4-4c0b-96f1-6a6f558da09e", 00:14:47.267 "strip_size_kb": 64, 00:14:47.267 "state": "online", 00:14:47.267 "raid_level": "raid5f", 00:14:47.267 "superblock": true, 00:14:47.267 "num_base_bdevs": 3, 00:14:47.267 "num_base_bdevs_discovered": 2, 00:14:47.267 "num_base_bdevs_operational": 2, 00:14:47.267 "base_bdevs_list": [ 00:14:47.267 { 00:14:47.267 "name": null, 00:14:47.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.267 "is_configured": false, 00:14:47.267 "data_offset": 0, 00:14:47.267 "data_size": 63488 00:14:47.267 }, 00:14:47.267 { 00:14:47.267 "name": "BaseBdev2", 00:14:47.267 "uuid": "dc21839a-2e30-5589-ac8f-2616c5ee2b8f", 00:14:47.267 "is_configured": true, 00:14:47.267 "data_offset": 2048, 00:14:47.267 "data_size": 63488 00:14:47.267 }, 00:14:47.267 { 00:14:47.267 "name": "BaseBdev3", 00:14:47.267 "uuid": "f08b0f61-923f-58a5-896b-26ba79fd1592", 00:14:47.267 "is_configured": true, 00:14:47.267 "data_offset": 2048, 00:14:47.267 "data_size": 63488 00:14:47.267 } 00:14:47.267 ] 00:14:47.267 }' 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92780 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92780 ']' 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92780 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:47.267 02:47:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92780 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92780' 00:14:47.267 killing process with pid 92780 00:14:47.267 Received shutdown signal, test time was about 60.000000 seconds 00:14:47.267 00:14:47.267 Latency(us) 00:14:47.267 [2024-12-07T02:47:58.345Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.267 [2024-12-07T02:47:58.345Z] =================================================================================================================== 00:14:47.267 [2024-12-07T02:47:58.345Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92780 00:14:47.267 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92780 00:14:47.267 [2024-12-07 02:47:58.332385] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.267 [2024-12-07 02:47:58.332494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.267 [2024-12-07 02:47:58.332616] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.267 [2024-12-07 02:47:58.332630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:14:47.527 [2024-12-07 02:47:58.373861] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.527 02:47:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:47.527 00:14:47.527 real 0m21.580s 00:14:47.527 user 0m27.975s 00:14:47.527 sys 0m2.833s 00:14:47.786 ************************************ 00:14:47.786 END TEST raid5f_rebuild_test_sb 00:14:47.786 ************************************ 00:14:47.786 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.786 02:47:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.786 02:47:58 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:47.786 02:47:58 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:14:47.786 02:47:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:47.786 02:47:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.786 02:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.786 ************************************ 00:14:47.786 START TEST raid5f_state_function_test 00:14:47.786 ************************************ 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=93516 00:14:47.786 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93516' 00:14:47.787 Process raid pid: 93516 00:14:47.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 93516 00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 93516 ']' 00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:47.787 02:47:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.787 [2024-12-07 02:47:58.780886] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:14:47.787 [2024-12-07 02:47:58.781015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.047 [2024-12-07 02:47:58.942094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.047 [2024-12-07 02:47:58.987395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.047 [2024-12-07 02:47:59.029807] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.047 [2024-12-07 02:47:59.029844] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.618 [2024-12-07 02:47:59.599094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.618 [2024-12-07 02:47:59.599141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.618 [2024-12-07 
02:47:59.599153] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.618 [2024-12-07 02:47:59.599162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.618 [2024-12-07 02:47:59.599168] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:48.618 [2024-12-07 02:47:59.599180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:48.618 [2024-12-07 02:47:59.599186] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:48.618 [2024-12-07 02:47:59.599194] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.618 02:47:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.618 "name": "Existed_Raid", 00:14:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.618 "strip_size_kb": 64, 00:14:48.618 "state": "configuring", 00:14:48.618 "raid_level": "raid5f", 00:14:48.618 "superblock": false, 00:14:48.618 "num_base_bdevs": 4, 00:14:48.618 "num_base_bdevs_discovered": 0, 00:14:48.618 "num_base_bdevs_operational": 4, 00:14:48.618 "base_bdevs_list": [ 00:14:48.618 { 00:14:48.618 "name": "BaseBdev1", 00:14:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.618 "is_configured": false, 00:14:48.618 "data_offset": 0, 00:14:48.618 "data_size": 0 00:14:48.618 }, 00:14:48.618 { 00:14:48.618 "name": "BaseBdev2", 00:14:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.618 "is_configured": false, 00:14:48.618 "data_offset": 0, 00:14:48.618 "data_size": 0 00:14:48.618 }, 00:14:48.618 { 00:14:48.618 "name": "BaseBdev3", 00:14:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.618 "is_configured": false, 00:14:48.618 "data_offset": 0, 00:14:48.618 "data_size": 0 00:14:48.618 }, 00:14:48.618 { 00:14:48.618 "name": "BaseBdev4", 00:14:48.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.618 "is_configured": false, 00:14:48.618 
"data_offset": 0, 00:14:48.618 "data_size": 0 00:14:48.618 } 00:14:48.618 ] 00:14:48.618 }' 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.618 02:47:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.187 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.187 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.187 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.187 [2024-12-07 02:48:00.026266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.187 [2024-12-07 02:48:00.026344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:49.187 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.187 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.188 [2024-12-07 02:48:00.034298] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:49.188 [2024-12-07 02:48:00.034375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:49.188 [2024-12-07 02:48:00.034401] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.188 [2024-12-07 02:48:00.034424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.188 [2024-12-07 
02:48:00.034442] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.188 [2024-12-07 02:48:00.034463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.188 [2024-12-07 02:48:00.034481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:49.188 [2024-12-07 02:48:00.034502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.188 [2024-12-07 02:48:00.051267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.188 BaseBdev1 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.188 [ 00:14:49.188 { 00:14:49.188 "name": "BaseBdev1", 00:14:49.188 "aliases": [ 00:14:49.188 "d3019cac-8e6d-47a8-8f65-06da44328b0a" 00:14:49.188 ], 00:14:49.188 "product_name": "Malloc disk", 00:14:49.188 "block_size": 512, 00:14:49.188 "num_blocks": 65536, 00:14:49.188 "uuid": "d3019cac-8e6d-47a8-8f65-06da44328b0a", 00:14:49.188 "assigned_rate_limits": { 00:14:49.188 "rw_ios_per_sec": 0, 00:14:49.188 "rw_mbytes_per_sec": 0, 00:14:49.188 "r_mbytes_per_sec": 0, 00:14:49.188 "w_mbytes_per_sec": 0 00:14:49.188 }, 00:14:49.188 "claimed": true, 00:14:49.188 "claim_type": "exclusive_write", 00:14:49.188 "zoned": false, 00:14:49.188 "supported_io_types": { 00:14:49.188 "read": true, 00:14:49.188 "write": true, 00:14:49.188 "unmap": true, 00:14:49.188 "flush": true, 00:14:49.188 "reset": true, 00:14:49.188 "nvme_admin": false, 00:14:49.188 "nvme_io": false, 00:14:49.188 "nvme_io_md": false, 00:14:49.188 "write_zeroes": true, 00:14:49.188 "zcopy": true, 00:14:49.188 "get_zone_info": false, 00:14:49.188 "zone_management": false, 00:14:49.188 "zone_append": false, 00:14:49.188 "compare": false, 00:14:49.188 "compare_and_write": false, 00:14:49.188 "abort": true, 00:14:49.188 "seek_hole": false, 00:14:49.188 "seek_data": false, 00:14:49.188 "copy": true, 00:14:49.188 
"nvme_iov_md": false 00:14:49.188 }, 00:14:49.188 "memory_domains": [ 00:14:49.188 { 00:14:49.188 "dma_device_id": "system", 00:14:49.188 "dma_device_type": 1 00:14:49.188 }, 00:14:49.188 { 00:14:49.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.188 "dma_device_type": 2 00:14:49.188 } 00:14:49.188 ], 00:14:49.188 "driver_specific": {} 00:14:49.188 } 00:14:49.188 ] 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.188 "name": "Existed_Raid", 00:14:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.188 "strip_size_kb": 64, 00:14:49.188 "state": "configuring", 00:14:49.188 "raid_level": "raid5f", 00:14:49.188 "superblock": false, 00:14:49.188 "num_base_bdevs": 4, 00:14:49.188 "num_base_bdevs_discovered": 1, 00:14:49.188 "num_base_bdevs_operational": 4, 00:14:49.188 "base_bdevs_list": [ 00:14:49.188 { 00:14:49.188 "name": "BaseBdev1", 00:14:49.188 "uuid": "d3019cac-8e6d-47a8-8f65-06da44328b0a", 00:14:49.188 "is_configured": true, 00:14:49.188 "data_offset": 0, 00:14:49.188 "data_size": 65536 00:14:49.188 }, 00:14:49.188 { 00:14:49.188 "name": "BaseBdev2", 00:14:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.188 "is_configured": false, 00:14:49.188 "data_offset": 0, 00:14:49.188 "data_size": 0 00:14:49.188 }, 00:14:49.188 { 00:14:49.188 "name": "BaseBdev3", 00:14:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.188 "is_configured": false, 00:14:49.188 "data_offset": 0, 00:14:49.188 "data_size": 0 00:14:49.188 }, 00:14:49.188 { 00:14:49.188 "name": "BaseBdev4", 00:14:49.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.188 "is_configured": false, 00:14:49.188 "data_offset": 0, 00:14:49.188 "data_size": 0 00:14:49.188 } 00:14:49.188 ] 00:14:49.188 }' 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.188 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:49.449 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:49.449 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.449 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.449 [2024-12-07 02:48:00.510554] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:49.449 [2024-12-07 02:48:00.510645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:49.449 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.449 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:49.449 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.449 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.449 [2024-12-07 02:48:00.522563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.449 [2024-12-07 02:48:00.524384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:49.449 [2024-12-07 02:48:00.524456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:49.449 [2024-12-07 02:48:00.524481] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:49.449 [2024-12-07 02:48:00.524504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:49.449 [2024-12-07 02:48:00.524521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:49.449 [2024-12-07 02:48:00.524541] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.717 02:48:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.717 "name": "Existed_Raid", 00:14:49.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.717 "strip_size_kb": 64, 00:14:49.717 "state": "configuring", 00:14:49.717 "raid_level": "raid5f", 00:14:49.717 "superblock": false, 00:14:49.717 "num_base_bdevs": 4, 00:14:49.717 "num_base_bdevs_discovered": 1, 00:14:49.717 "num_base_bdevs_operational": 4, 00:14:49.717 "base_bdevs_list": [ 00:14:49.717 { 00:14:49.717 "name": "BaseBdev1", 00:14:49.717 "uuid": "d3019cac-8e6d-47a8-8f65-06da44328b0a", 00:14:49.717 "is_configured": true, 00:14:49.717 "data_offset": 0, 00:14:49.717 "data_size": 65536 00:14:49.717 }, 00:14:49.717 { 00:14:49.717 "name": "BaseBdev2", 00:14:49.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.717 "is_configured": false, 00:14:49.717 "data_offset": 0, 00:14:49.717 "data_size": 0 00:14:49.717 }, 00:14:49.717 { 00:14:49.717 "name": "BaseBdev3", 00:14:49.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.717 "is_configured": false, 00:14:49.717 "data_offset": 0, 00:14:49.717 "data_size": 0 00:14:49.717 }, 00:14:49.717 { 00:14:49.717 "name": "BaseBdev4", 00:14:49.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.717 "is_configured": false, 00:14:49.717 "data_offset": 0, 00:14:49.717 "data_size": 0 00:14:49.717 } 00:14:49.717 ] 00:14:49.717 }' 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.717 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.979 02:48:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:49.979 02:48:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.979 02:48:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.979 [2024-12-07 02:48:01.010219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.979 BaseBdev2 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.979 [ 00:14:49.979 { 00:14:49.979 "name": 
"BaseBdev2", 00:14:49.979 "aliases": [ 00:14:49.979 "47615b8f-ede0-47c0-8dd9-2ff0b0c71f92" 00:14:49.979 ], 00:14:49.979 "product_name": "Malloc disk", 00:14:49.979 "block_size": 512, 00:14:49.979 "num_blocks": 65536, 00:14:49.979 "uuid": "47615b8f-ede0-47c0-8dd9-2ff0b0c71f92", 00:14:49.979 "assigned_rate_limits": { 00:14:49.979 "rw_ios_per_sec": 0, 00:14:49.979 "rw_mbytes_per_sec": 0, 00:14:49.979 "r_mbytes_per_sec": 0, 00:14:49.979 "w_mbytes_per_sec": 0 00:14:49.979 }, 00:14:49.979 "claimed": true, 00:14:49.979 "claim_type": "exclusive_write", 00:14:49.979 "zoned": false, 00:14:49.979 "supported_io_types": { 00:14:49.979 "read": true, 00:14:49.979 "write": true, 00:14:49.979 "unmap": true, 00:14:49.979 "flush": true, 00:14:49.979 "reset": true, 00:14:49.979 "nvme_admin": false, 00:14:49.979 "nvme_io": false, 00:14:49.979 "nvme_io_md": false, 00:14:49.979 "write_zeroes": true, 00:14:49.979 "zcopy": true, 00:14:49.979 "get_zone_info": false, 00:14:49.979 "zone_management": false, 00:14:49.979 "zone_append": false, 00:14:49.979 "compare": false, 00:14:49.979 "compare_and_write": false, 00:14:49.979 "abort": true, 00:14:49.979 "seek_hole": false, 00:14:49.979 "seek_data": false, 00:14:49.979 "copy": true, 00:14:49.979 "nvme_iov_md": false 00:14:49.979 }, 00:14:49.979 "memory_domains": [ 00:14:49.979 { 00:14:49.979 "dma_device_id": "system", 00:14:49.979 "dma_device_type": 1 00:14:49.979 }, 00:14:49.979 { 00:14:49.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.979 "dma_device_type": 2 00:14:49.979 } 00:14:49.979 ], 00:14:49.979 "driver_specific": {} 00:14:49.979 } 00:14:49.979 ] 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.979 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.237 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.237 "name": "Existed_Raid", 00:14:50.237 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:50.237 "strip_size_kb": 64, 00:14:50.237 "state": "configuring", 00:14:50.237 "raid_level": "raid5f", 00:14:50.237 "superblock": false, 00:14:50.237 "num_base_bdevs": 4, 00:14:50.237 "num_base_bdevs_discovered": 2, 00:14:50.237 "num_base_bdevs_operational": 4, 00:14:50.237 "base_bdevs_list": [ 00:14:50.237 { 00:14:50.237 "name": "BaseBdev1", 00:14:50.237 "uuid": "d3019cac-8e6d-47a8-8f65-06da44328b0a", 00:14:50.237 "is_configured": true, 00:14:50.238 "data_offset": 0, 00:14:50.238 "data_size": 65536 00:14:50.238 }, 00:14:50.238 { 00:14:50.238 "name": "BaseBdev2", 00:14:50.238 "uuid": "47615b8f-ede0-47c0-8dd9-2ff0b0c71f92", 00:14:50.238 "is_configured": true, 00:14:50.238 "data_offset": 0, 00:14:50.238 "data_size": 65536 00:14:50.238 }, 00:14:50.238 { 00:14:50.238 "name": "BaseBdev3", 00:14:50.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.238 "is_configured": false, 00:14:50.238 "data_offset": 0, 00:14:50.238 "data_size": 0 00:14:50.238 }, 00:14:50.238 { 00:14:50.238 "name": "BaseBdev4", 00:14:50.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.238 "is_configured": false, 00:14:50.238 "data_offset": 0, 00:14:50.238 "data_size": 0 00:14:50.238 } 00:14:50.238 ] 00:14:50.238 }' 00:14:50.238 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.238 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.497 [2024-12-07 02:48:01.540694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.497 BaseBdev3 00:14:50.497 02:48:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.497 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.497 [ 00:14:50.497 { 00:14:50.497 "name": "BaseBdev3", 00:14:50.497 "aliases": [ 00:14:50.497 "ca58f5e0-cc31-4978-ba6c-978258c6e668" 00:14:50.497 ], 00:14:50.497 "product_name": "Malloc disk", 00:14:50.497 "block_size": 512, 00:14:50.497 "num_blocks": 65536, 00:14:50.497 "uuid": "ca58f5e0-cc31-4978-ba6c-978258c6e668", 00:14:50.497 "assigned_rate_limits": { 00:14:50.497 "rw_ios_per_sec": 0, 00:14:50.497 
"rw_mbytes_per_sec": 0, 00:14:50.497 "r_mbytes_per_sec": 0, 00:14:50.497 "w_mbytes_per_sec": 0 00:14:50.497 }, 00:14:50.497 "claimed": true, 00:14:50.497 "claim_type": "exclusive_write", 00:14:50.497 "zoned": false, 00:14:50.497 "supported_io_types": { 00:14:50.497 "read": true, 00:14:50.497 "write": true, 00:14:50.497 "unmap": true, 00:14:50.497 "flush": true, 00:14:50.497 "reset": true, 00:14:50.497 "nvme_admin": false, 00:14:50.497 "nvme_io": false, 00:14:50.497 "nvme_io_md": false, 00:14:50.497 "write_zeroes": true, 00:14:50.497 "zcopy": true, 00:14:50.497 "get_zone_info": false, 00:14:50.497 "zone_management": false, 00:14:50.497 "zone_append": false, 00:14:50.497 "compare": false, 00:14:50.497 "compare_and_write": false, 00:14:50.756 "abort": true, 00:14:50.756 "seek_hole": false, 00:14:50.756 "seek_data": false, 00:14:50.756 "copy": true, 00:14:50.756 "nvme_iov_md": false 00:14:50.756 }, 00:14:50.756 "memory_domains": [ 00:14:50.756 { 00:14:50.756 "dma_device_id": "system", 00:14:50.756 "dma_device_type": 1 00:14:50.756 }, 00:14:50.756 { 00:14:50.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.756 "dma_device_type": 2 00:14:50.756 } 00:14:50.756 ], 00:14:50.756 "driver_specific": {} 00:14:50.756 } 00:14:50.756 ] 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.756 02:48:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.756 "name": "Existed_Raid", 00:14:50.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.756 "strip_size_kb": 64, 00:14:50.756 "state": "configuring", 00:14:50.756 "raid_level": "raid5f", 00:14:50.756 "superblock": false, 00:14:50.756 "num_base_bdevs": 4, 00:14:50.756 "num_base_bdevs_discovered": 3, 00:14:50.756 "num_base_bdevs_operational": 4, 00:14:50.756 "base_bdevs_list": [ 00:14:50.756 { 
00:14:50.756 "name": "BaseBdev1", 00:14:50.756 "uuid": "d3019cac-8e6d-47a8-8f65-06da44328b0a", 00:14:50.756 "is_configured": true, 00:14:50.756 "data_offset": 0, 00:14:50.756 "data_size": 65536 00:14:50.756 }, 00:14:50.756 { 00:14:50.756 "name": "BaseBdev2", 00:14:50.756 "uuid": "47615b8f-ede0-47c0-8dd9-2ff0b0c71f92", 00:14:50.756 "is_configured": true, 00:14:50.756 "data_offset": 0, 00:14:50.756 "data_size": 65536 00:14:50.756 }, 00:14:50.756 { 00:14:50.756 "name": "BaseBdev3", 00:14:50.756 "uuid": "ca58f5e0-cc31-4978-ba6c-978258c6e668", 00:14:50.756 "is_configured": true, 00:14:50.756 "data_offset": 0, 00:14:50.756 "data_size": 65536 00:14:50.756 }, 00:14:50.756 { 00:14:50.756 "name": "BaseBdev4", 00:14:50.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.756 "is_configured": false, 00:14:50.756 "data_offset": 0, 00:14:50.756 "data_size": 0 00:14:50.756 } 00:14:50.756 ] 00:14:50.756 }' 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.756 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.016 [2024-12-07 02:48:01.995065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:51.016 [2024-12-07 02:48:01.995168] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:14:51.016 [2024-12-07 02:48:01.995191] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:51.016 [2024-12-07 02:48:01.995481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:51.016 [2024-12-07 
02:48:01.995971] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:14:51.016 [2024-12-07 02:48:01.995987] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:14:51.016 [2024-12-07 02:48:01.996184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.016 BaseBdev4 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.016 02:48:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:51.017 [ 00:14:51.017 { 00:14:51.017 "name": "BaseBdev4", 00:14:51.017 "aliases": [ 00:14:51.017 "825f6806-3e83-4b8c-b420-830ccc7f737f" 00:14:51.017 ], 00:14:51.017 "product_name": "Malloc disk", 00:14:51.017 "block_size": 512, 00:14:51.017 "num_blocks": 65536, 00:14:51.017 "uuid": "825f6806-3e83-4b8c-b420-830ccc7f737f", 00:14:51.017 "assigned_rate_limits": { 00:14:51.017 "rw_ios_per_sec": 0, 00:14:51.017 "rw_mbytes_per_sec": 0, 00:14:51.017 "r_mbytes_per_sec": 0, 00:14:51.017 "w_mbytes_per_sec": 0 00:14:51.017 }, 00:14:51.017 "claimed": true, 00:14:51.017 "claim_type": "exclusive_write", 00:14:51.017 "zoned": false, 00:14:51.017 "supported_io_types": { 00:14:51.017 "read": true, 00:14:51.017 "write": true, 00:14:51.017 "unmap": true, 00:14:51.017 "flush": true, 00:14:51.017 "reset": true, 00:14:51.017 "nvme_admin": false, 00:14:51.017 "nvme_io": false, 00:14:51.017 "nvme_io_md": false, 00:14:51.017 "write_zeroes": true, 00:14:51.017 "zcopy": true, 00:14:51.017 "get_zone_info": false, 00:14:51.017 "zone_management": false, 00:14:51.017 "zone_append": false, 00:14:51.017 "compare": false, 00:14:51.017 "compare_and_write": false, 00:14:51.017 "abort": true, 00:14:51.017 "seek_hole": false, 00:14:51.017 "seek_data": false, 00:14:51.017 "copy": true, 00:14:51.017 "nvme_iov_md": false 00:14:51.017 }, 00:14:51.017 "memory_domains": [ 00:14:51.017 { 00:14:51.017 "dma_device_id": "system", 00:14:51.017 "dma_device_type": 1 00:14:51.017 }, 00:14:51.017 { 00:14:51.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.017 "dma_device_type": 2 00:14:51.017 } 00:14:51.017 ], 00:14:51.017 "driver_specific": {} 00:14:51.017 } 00:14:51.017 ] 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:51.017 02:48:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.017 "name": "Existed_Raid", 00:14:51.017 
"uuid": "2290e930-45f4-4890-a6e9-d75ccfdd0d5c", 00:14:51.017 "strip_size_kb": 64, 00:14:51.017 "state": "online", 00:14:51.017 "raid_level": "raid5f", 00:14:51.017 "superblock": false, 00:14:51.017 "num_base_bdevs": 4, 00:14:51.017 "num_base_bdevs_discovered": 4, 00:14:51.017 "num_base_bdevs_operational": 4, 00:14:51.017 "base_bdevs_list": [ 00:14:51.017 { 00:14:51.017 "name": "BaseBdev1", 00:14:51.017 "uuid": "d3019cac-8e6d-47a8-8f65-06da44328b0a", 00:14:51.017 "is_configured": true, 00:14:51.017 "data_offset": 0, 00:14:51.017 "data_size": 65536 00:14:51.017 }, 00:14:51.017 { 00:14:51.017 "name": "BaseBdev2", 00:14:51.017 "uuid": "47615b8f-ede0-47c0-8dd9-2ff0b0c71f92", 00:14:51.017 "is_configured": true, 00:14:51.017 "data_offset": 0, 00:14:51.017 "data_size": 65536 00:14:51.017 }, 00:14:51.017 { 00:14:51.017 "name": "BaseBdev3", 00:14:51.017 "uuid": "ca58f5e0-cc31-4978-ba6c-978258c6e668", 00:14:51.017 "is_configured": true, 00:14:51.017 "data_offset": 0, 00:14:51.017 "data_size": 65536 00:14:51.017 }, 00:14:51.017 { 00:14:51.017 "name": "BaseBdev4", 00:14:51.017 "uuid": "825f6806-3e83-4b8c-b420-830ccc7f737f", 00:14:51.017 "is_configured": true, 00:14:51.017 "data_offset": 0, 00:14:51.017 "data_size": 65536 00:14:51.017 } 00:14:51.017 ] 00:14:51.017 }' 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.017 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:51.586 02:48:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.586 [2024-12-07 02:48:02.494470] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.586 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:51.586 "name": "Existed_Raid", 00:14:51.586 "aliases": [ 00:14:51.586 "2290e930-45f4-4890-a6e9-d75ccfdd0d5c" 00:14:51.586 ], 00:14:51.586 "product_name": "Raid Volume", 00:14:51.586 "block_size": 512, 00:14:51.586 "num_blocks": 196608, 00:14:51.586 "uuid": "2290e930-45f4-4890-a6e9-d75ccfdd0d5c", 00:14:51.586 "assigned_rate_limits": { 00:14:51.586 "rw_ios_per_sec": 0, 00:14:51.586 "rw_mbytes_per_sec": 0, 00:14:51.586 "r_mbytes_per_sec": 0, 00:14:51.586 "w_mbytes_per_sec": 0 00:14:51.586 }, 00:14:51.586 "claimed": false, 00:14:51.586 "zoned": false, 00:14:51.586 "supported_io_types": { 00:14:51.586 "read": true, 00:14:51.586 "write": true, 00:14:51.586 "unmap": false, 00:14:51.586 "flush": false, 00:14:51.586 "reset": true, 00:14:51.586 "nvme_admin": false, 00:14:51.586 "nvme_io": false, 00:14:51.586 "nvme_io_md": false, 00:14:51.586 "write_zeroes": true, 00:14:51.586 "zcopy": false, 00:14:51.586 "get_zone_info": false, 00:14:51.586 "zone_management": false, 00:14:51.586 "zone_append": false, 
00:14:51.586 "compare": false, 00:14:51.587 "compare_and_write": false, 00:14:51.587 "abort": false, 00:14:51.587 "seek_hole": false, 00:14:51.587 "seek_data": false, 00:14:51.587 "copy": false, 00:14:51.587 "nvme_iov_md": false 00:14:51.587 }, 00:14:51.587 "driver_specific": { 00:14:51.587 "raid": { 00:14:51.587 "uuid": "2290e930-45f4-4890-a6e9-d75ccfdd0d5c", 00:14:51.587 "strip_size_kb": 64, 00:14:51.587 "state": "online", 00:14:51.587 "raid_level": "raid5f", 00:14:51.587 "superblock": false, 00:14:51.587 "num_base_bdevs": 4, 00:14:51.587 "num_base_bdevs_discovered": 4, 00:14:51.587 "num_base_bdevs_operational": 4, 00:14:51.587 "base_bdevs_list": [ 00:14:51.587 { 00:14:51.587 "name": "BaseBdev1", 00:14:51.587 "uuid": "d3019cac-8e6d-47a8-8f65-06da44328b0a", 00:14:51.587 "is_configured": true, 00:14:51.587 "data_offset": 0, 00:14:51.587 "data_size": 65536 00:14:51.587 }, 00:14:51.587 { 00:14:51.587 "name": "BaseBdev2", 00:14:51.587 "uuid": "47615b8f-ede0-47c0-8dd9-2ff0b0c71f92", 00:14:51.587 "is_configured": true, 00:14:51.587 "data_offset": 0, 00:14:51.587 "data_size": 65536 00:14:51.587 }, 00:14:51.587 { 00:14:51.587 "name": "BaseBdev3", 00:14:51.587 "uuid": "ca58f5e0-cc31-4978-ba6c-978258c6e668", 00:14:51.587 "is_configured": true, 00:14:51.587 "data_offset": 0, 00:14:51.587 "data_size": 65536 00:14:51.587 }, 00:14:51.587 { 00:14:51.587 "name": "BaseBdev4", 00:14:51.587 "uuid": "825f6806-3e83-4b8c-b420-830ccc7f737f", 00:14:51.587 "is_configured": true, 00:14:51.587 "data_offset": 0, 00:14:51.587 "data_size": 65536 00:14:51.587 } 00:14:51.587 ] 00:14:51.587 } 00:14:51.587 } 00:14:51.587 }' 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:51.587 BaseBdev2 00:14:51.587 BaseBdev3 00:14:51.587 BaseBdev4' 00:14:51.587 
02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.587 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 02:48:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 [2024-12-07 02:48:02.813749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.846 02:48:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.846 "name": "Existed_Raid", 00:14:51.846 "uuid": "2290e930-45f4-4890-a6e9-d75ccfdd0d5c", 00:14:51.846 "strip_size_kb": 64, 00:14:51.846 "state": "online", 00:14:51.846 "raid_level": "raid5f", 00:14:51.846 "superblock": false, 00:14:51.846 "num_base_bdevs": 4, 00:14:51.846 "num_base_bdevs_discovered": 3, 00:14:51.846 "num_base_bdevs_operational": 3, 00:14:51.846 "base_bdevs_list": [ 00:14:51.846 { 00:14:51.846 "name": null, 00:14:51.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.846 "is_configured": false, 00:14:51.846 "data_offset": 0, 00:14:51.846 "data_size": 65536 00:14:51.846 }, 00:14:51.846 { 00:14:51.846 "name": "BaseBdev2", 00:14:51.846 "uuid": "47615b8f-ede0-47c0-8dd9-2ff0b0c71f92", 00:14:51.846 "is_configured": true, 00:14:51.846 "data_offset": 0, 00:14:51.846 "data_size": 65536 00:14:51.846 }, 00:14:51.846 { 00:14:51.846 "name": "BaseBdev3", 
00:14:51.846 "uuid": "ca58f5e0-cc31-4978-ba6c-978258c6e668", 00:14:51.846 "is_configured": true, 00:14:51.846 "data_offset": 0, 00:14:51.846 "data_size": 65536 00:14:51.846 }, 00:14:51.846 { 00:14:51.846 "name": "BaseBdev4", 00:14:51.846 "uuid": "825f6806-3e83-4b8c-b420-830ccc7f737f", 00:14:51.846 "is_configured": true, 00:14:51.846 "data_offset": 0, 00:14:51.846 "data_size": 65536 00:14:51.846 } 00:14:51.846 ] 00:14:51.846 }' 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.846 02:48:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:52.414 [2024-12-07 02:48:03.328083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:52.414 [2024-12-07 02:48:03.328213] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.414 [2024-12-07 02:48:03.339425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 [2024-12-07 02:48:03.395344] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.414 [2024-12-07 02:48:03.466462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:52.414 [2024-12-07 02:48:03.466550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:52.414 
02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:52.414 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.675 BaseBdev2 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.675 [ 00:14:52.675 { 00:14:52.675 "name": "BaseBdev2", 00:14:52.675 "aliases": [ 00:14:52.675 "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e" 00:14:52.675 ], 00:14:52.675 "product_name": "Malloc disk", 00:14:52.675 "block_size": 512, 00:14:52.675 "num_blocks": 65536, 00:14:52.675 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:52.675 "assigned_rate_limits": { 00:14:52.675 "rw_ios_per_sec": 0, 00:14:52.675 "rw_mbytes_per_sec": 0, 00:14:52.675 "r_mbytes_per_sec": 0, 00:14:52.675 "w_mbytes_per_sec": 0 00:14:52.675 }, 00:14:52.675 "claimed": false, 00:14:52.675 "zoned": false, 00:14:52.675 "supported_io_types": { 00:14:52.675 "read": true, 00:14:52.675 "write": true, 00:14:52.675 "unmap": true, 00:14:52.675 "flush": true, 00:14:52.675 "reset": true, 00:14:52.675 "nvme_admin": false, 00:14:52.675 "nvme_io": false, 00:14:52.675 "nvme_io_md": false, 00:14:52.675 "write_zeroes": true, 00:14:52.675 "zcopy": true, 
00:14:52.675 "get_zone_info": false, 00:14:52.675 "zone_management": false, 00:14:52.675 "zone_append": false, 00:14:52.675 "compare": false, 00:14:52.675 "compare_and_write": false, 00:14:52.675 "abort": true, 00:14:52.675 "seek_hole": false, 00:14:52.675 "seek_data": false, 00:14:52.675 "copy": true, 00:14:52.675 "nvme_iov_md": false 00:14:52.675 }, 00:14:52.675 "memory_domains": [ 00:14:52.675 { 00:14:52.675 "dma_device_id": "system", 00:14:52.675 "dma_device_type": 1 00:14:52.675 }, 00:14:52.675 { 00:14:52.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.675 "dma_device_type": 2 00:14:52.675 } 00:14:52.675 ], 00:14:52.675 "driver_specific": {} 00:14:52.675 } 00:14:52.675 ] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.675 BaseBdev3 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:52.675 02:48:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.675 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.675 [ 00:14:52.675 { 00:14:52.675 "name": "BaseBdev3", 00:14:52.675 "aliases": [ 00:14:52.675 "c0335b7e-a241-48a3-bd8a-7194ae053c5b" 00:14:52.675 ], 00:14:52.675 "product_name": "Malloc disk", 00:14:52.675 "block_size": 512, 00:14:52.675 "num_blocks": 65536, 00:14:52.675 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:52.675 "assigned_rate_limits": { 00:14:52.675 "rw_ios_per_sec": 0, 00:14:52.675 "rw_mbytes_per_sec": 0, 00:14:52.675 "r_mbytes_per_sec": 0, 00:14:52.675 "w_mbytes_per_sec": 0 00:14:52.675 }, 00:14:52.675 "claimed": false, 00:14:52.675 "zoned": false, 00:14:52.675 "supported_io_types": { 00:14:52.675 "read": true, 00:14:52.675 "write": true, 00:14:52.675 "unmap": true, 00:14:52.675 "flush": true, 00:14:52.675 "reset": true, 00:14:52.675 "nvme_admin": false, 00:14:52.675 "nvme_io": false, 00:14:52.675 "nvme_io_md": false, 00:14:52.676 
"write_zeroes": true, 00:14:52.676 "zcopy": true, 00:14:52.676 "get_zone_info": false, 00:14:52.676 "zone_management": false, 00:14:52.676 "zone_append": false, 00:14:52.676 "compare": false, 00:14:52.676 "compare_and_write": false, 00:14:52.676 "abort": true, 00:14:52.676 "seek_hole": false, 00:14:52.676 "seek_data": false, 00:14:52.676 "copy": true, 00:14:52.676 "nvme_iov_md": false 00:14:52.676 }, 00:14:52.676 "memory_domains": [ 00:14:52.676 { 00:14:52.676 "dma_device_id": "system", 00:14:52.676 "dma_device_type": 1 00:14:52.676 }, 00:14:52.676 { 00:14:52.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.676 "dma_device_type": 2 00:14:52.676 } 00:14:52.676 ], 00:14:52.676 "driver_specific": {} 00:14:52.676 } 00:14:52.676 ] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.676 BaseBdev4 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.676 [ 00:14:52.676 { 00:14:52.676 "name": "BaseBdev4", 00:14:52.676 "aliases": [ 00:14:52.676 "7e151a3e-79e8-4a46-b3fe-8df29e693c12" 00:14:52.676 ], 00:14:52.676 "product_name": "Malloc disk", 00:14:52.676 "block_size": 512, 00:14:52.676 "num_blocks": 65536, 00:14:52.676 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:52.676 "assigned_rate_limits": { 00:14:52.676 "rw_ios_per_sec": 0, 00:14:52.676 "rw_mbytes_per_sec": 0, 00:14:52.676 "r_mbytes_per_sec": 0, 00:14:52.676 "w_mbytes_per_sec": 0 00:14:52.676 }, 00:14:52.676 "claimed": false, 00:14:52.676 "zoned": false, 00:14:52.676 "supported_io_types": { 00:14:52.676 "read": true, 00:14:52.676 "write": true, 00:14:52.676 "unmap": true, 00:14:52.676 "flush": true, 00:14:52.676 "reset": true, 00:14:52.676 "nvme_admin": false, 00:14:52.676 "nvme_io": false, 00:14:52.676 
"nvme_io_md": false, 00:14:52.676 "write_zeroes": true, 00:14:52.676 "zcopy": true, 00:14:52.676 "get_zone_info": false, 00:14:52.676 "zone_management": false, 00:14:52.676 "zone_append": false, 00:14:52.676 "compare": false, 00:14:52.676 "compare_and_write": false, 00:14:52.676 "abort": true, 00:14:52.676 "seek_hole": false, 00:14:52.676 "seek_data": false, 00:14:52.676 "copy": true, 00:14:52.676 "nvme_iov_md": false 00:14:52.676 }, 00:14:52.676 "memory_domains": [ 00:14:52.676 { 00:14:52.676 "dma_device_id": "system", 00:14:52.676 "dma_device_type": 1 00:14:52.676 }, 00:14:52.676 { 00:14:52.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.676 "dma_device_type": 2 00:14:52.676 } 00:14:52.676 ], 00:14:52.676 "driver_specific": {} 00:14:52.676 } 00:14:52.676 ] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.676 [2024-12-07 02:48:03.697669] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.676 [2024-12-07 02:48:03.697750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.676 [2024-12-07 02:48:03.697789] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:52.676 [2024-12-07 02:48:03.699601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.676 [2024-12-07 02:48:03.699687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.676 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.677 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.677 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.677 02:48:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.677 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.935 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.936 "name": "Existed_Raid", 00:14:52.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.936 "strip_size_kb": 64, 00:14:52.936 "state": "configuring", 00:14:52.936 "raid_level": "raid5f", 00:14:52.936 "superblock": false, 00:14:52.936 "num_base_bdevs": 4, 00:14:52.936 "num_base_bdevs_discovered": 3, 00:14:52.936 "num_base_bdevs_operational": 4, 00:14:52.936 "base_bdevs_list": [ 00:14:52.936 { 00:14:52.936 "name": "BaseBdev1", 00:14:52.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.936 "is_configured": false, 00:14:52.936 "data_offset": 0, 00:14:52.936 "data_size": 0 00:14:52.936 }, 00:14:52.936 { 00:14:52.936 "name": "BaseBdev2", 00:14:52.936 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:52.936 "is_configured": true, 00:14:52.936 "data_offset": 0, 00:14:52.936 "data_size": 65536 00:14:52.936 }, 00:14:52.936 { 00:14:52.936 "name": "BaseBdev3", 00:14:52.936 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:52.936 "is_configured": true, 00:14:52.936 "data_offset": 0, 00:14:52.936 "data_size": 65536 00:14:52.936 }, 00:14:52.936 { 00:14:52.936 "name": "BaseBdev4", 00:14:52.936 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:52.936 "is_configured": true, 00:14:52.936 "data_offset": 0, 00:14:52.936 "data_size": 65536 00:14:52.936 } 00:14:52.936 ] 00:14:52.936 }' 00:14:52.936 02:48:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.936 02:48:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.195 [2024-12-07 02:48:04.152843] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.195 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.195 "name": "Existed_Raid", 00:14:53.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.195 "strip_size_kb": 64, 00:14:53.195 "state": "configuring", 00:14:53.195 "raid_level": "raid5f", 00:14:53.195 "superblock": false, 00:14:53.195 "num_base_bdevs": 4, 00:14:53.195 "num_base_bdevs_discovered": 2, 00:14:53.195 "num_base_bdevs_operational": 4, 00:14:53.195 "base_bdevs_list": [ 00:14:53.195 { 00:14:53.195 "name": "BaseBdev1", 00:14:53.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.195 "is_configured": false, 00:14:53.195 "data_offset": 0, 00:14:53.195 "data_size": 0 00:14:53.195 }, 00:14:53.195 { 00:14:53.195 "name": null, 00:14:53.195 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:53.195 "is_configured": false, 00:14:53.195 "data_offset": 0, 00:14:53.195 "data_size": 65536 00:14:53.195 }, 00:14:53.195 { 00:14:53.195 "name": "BaseBdev3", 00:14:53.196 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:53.196 "is_configured": true, 00:14:53.196 "data_offset": 0, 00:14:53.196 "data_size": 65536 00:14:53.196 }, 00:14:53.196 { 00:14:53.196 "name": "BaseBdev4", 00:14:53.196 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:53.196 "is_configured": true, 00:14:53.196 "data_offset": 0, 00:14:53.196 "data_size": 65536 00:14:53.196 } 00:14:53.196 ] 00:14:53.196 }' 00:14:53.196 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.196 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.763 [2024-12-07 02:48:04.655027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.763 BaseBdev1 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.763 [ 00:14:53.763 { 00:14:53.763 "name": "BaseBdev1", 00:14:53.763 "aliases": [ 00:14:53.763 "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00" 00:14:53.763 ], 00:14:53.763 "product_name": "Malloc disk", 00:14:53.763 "block_size": 512, 00:14:53.763 "num_blocks": 65536, 00:14:53.763 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:53.763 "assigned_rate_limits": { 00:14:53.763 "rw_ios_per_sec": 0, 00:14:53.763 "rw_mbytes_per_sec": 0, 00:14:53.763 "r_mbytes_per_sec": 0, 00:14:53.763 "w_mbytes_per_sec": 0 00:14:53.763 }, 00:14:53.763 "claimed": true, 00:14:53.763 "claim_type": "exclusive_write", 00:14:53.763 "zoned": false, 00:14:53.763 "supported_io_types": { 00:14:53.763 "read": true, 00:14:53.763 "write": true, 00:14:53.763 "unmap": true, 00:14:53.763 "flush": true, 00:14:53.763 "reset": true, 00:14:53.763 "nvme_admin": false, 00:14:53.763 "nvme_io": false, 00:14:53.763 "nvme_io_md": false, 00:14:53.763 "write_zeroes": true, 00:14:53.763 "zcopy": true, 00:14:53.763 "get_zone_info": false, 00:14:53.763 "zone_management": false, 00:14:53.763 "zone_append": false, 00:14:53.763 "compare": false, 00:14:53.763 "compare_and_write": false, 00:14:53.763 "abort": true, 00:14:53.763 "seek_hole": false, 00:14:53.763 "seek_data": false, 00:14:53.763 "copy": true, 00:14:53.763 "nvme_iov_md": false 00:14:53.763 }, 00:14:53.763 "memory_domains": [ 00:14:53.763 { 00:14:53.763 "dma_device_id": "system", 00:14:53.763 
"dma_device_type": 1 00:14:53.763 }, 00:14:53.763 { 00:14:53.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.763 "dma_device_type": 2 00:14:53.763 } 00:14:53.763 ], 00:14:53.763 "driver_specific": {} 00:14:53.763 } 00:14:53.763 ] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.763 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.763 "name": "Existed_Raid", 00:14:53.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.763 "strip_size_kb": 64, 00:14:53.763 "state": "configuring", 00:14:53.763 "raid_level": "raid5f", 00:14:53.763 "superblock": false, 00:14:53.763 "num_base_bdevs": 4, 00:14:53.763 "num_base_bdevs_discovered": 3, 00:14:53.763 "num_base_bdevs_operational": 4, 00:14:53.763 "base_bdevs_list": [ 00:14:53.763 { 00:14:53.763 "name": "BaseBdev1", 00:14:53.763 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:53.763 "is_configured": true, 00:14:53.763 "data_offset": 0, 00:14:53.763 "data_size": 65536 00:14:53.763 }, 00:14:53.763 { 00:14:53.763 "name": null, 00:14:53.763 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:53.763 "is_configured": false, 00:14:53.763 "data_offset": 0, 00:14:53.763 "data_size": 65536 00:14:53.763 }, 00:14:53.763 { 00:14:53.763 "name": "BaseBdev3", 00:14:53.763 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:53.763 "is_configured": true, 00:14:53.763 "data_offset": 0, 00:14:53.763 "data_size": 65536 00:14:53.763 }, 00:14:53.763 { 00:14:53.763 "name": "BaseBdev4", 00:14:53.763 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:53.763 "is_configured": true, 00:14:53.764 "data_offset": 0, 00:14:53.764 "data_size": 65536 00:14:53.764 } 00:14:53.764 ] 00:14:53.764 }' 00:14:53.764 02:48:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.764 02:48:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.331 
02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.331 [2024-12-07 02:48:05.154259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.331 02:48:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.331 "name": "Existed_Raid", 00:14:54.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.331 "strip_size_kb": 64, 00:14:54.331 "state": "configuring", 00:14:54.331 "raid_level": "raid5f", 00:14:54.331 "superblock": false, 00:14:54.331 "num_base_bdevs": 4, 00:14:54.331 "num_base_bdevs_discovered": 2, 00:14:54.331 "num_base_bdevs_operational": 4, 00:14:54.331 "base_bdevs_list": [ 00:14:54.331 { 00:14:54.331 "name": "BaseBdev1", 00:14:54.331 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:54.331 "is_configured": true, 00:14:54.331 "data_offset": 0, 00:14:54.331 "data_size": 65536 00:14:54.331 }, 00:14:54.331 { 00:14:54.331 "name": null, 00:14:54.331 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:54.331 "is_configured": false, 00:14:54.331 "data_offset": 0, 00:14:54.331 "data_size": 65536 00:14:54.331 }, 00:14:54.331 { 00:14:54.331 "name": null, 00:14:54.331 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:54.331 "is_configured": false, 00:14:54.331 
"data_offset": 0, 00:14:54.331 "data_size": 65536 00:14:54.331 }, 00:14:54.331 { 00:14:54.331 "name": "BaseBdev4", 00:14:54.331 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:54.331 "is_configured": true, 00:14:54.331 "data_offset": 0, 00:14:54.331 "data_size": 65536 00:14:54.331 } 00:14:54.331 ] 00:14:54.331 }' 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.331 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.590 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.850 [2024-12-07 02:48:05.669410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:54.850 02:48:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.850 "name": "Existed_Raid", 00:14:54.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.850 "strip_size_kb": 64, 00:14:54.850 "state": "configuring", 00:14:54.850 "raid_level": "raid5f", 00:14:54.850 "superblock": false, 00:14:54.850 "num_base_bdevs": 4, 00:14:54.850 
"num_base_bdevs_discovered": 3, 00:14:54.850 "num_base_bdevs_operational": 4, 00:14:54.850 "base_bdevs_list": [ 00:14:54.850 { 00:14:54.850 "name": "BaseBdev1", 00:14:54.850 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:54.850 "is_configured": true, 00:14:54.850 "data_offset": 0, 00:14:54.850 "data_size": 65536 00:14:54.850 }, 00:14:54.850 { 00:14:54.850 "name": null, 00:14:54.850 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:54.850 "is_configured": false, 00:14:54.850 "data_offset": 0, 00:14:54.850 "data_size": 65536 00:14:54.850 }, 00:14:54.850 { 00:14:54.850 "name": "BaseBdev3", 00:14:54.850 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:54.850 "is_configured": true, 00:14:54.850 "data_offset": 0, 00:14:54.850 "data_size": 65536 00:14:54.850 }, 00:14:54.850 { 00:14:54.850 "name": "BaseBdev4", 00:14:54.850 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:54.850 "is_configured": true, 00:14:54.850 "data_offset": 0, 00:14:54.850 "data_size": 65536 00:14:54.850 } 00:14:54.850 ] 00:14:54.850 }' 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.850 02:48:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.108 [2024-12-07 02:48:06.152605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.108 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.367 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.367 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.367 "name": "Existed_Raid", 00:14:55.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.367 "strip_size_kb": 64, 00:14:55.367 "state": "configuring", 00:14:55.367 "raid_level": "raid5f", 00:14:55.367 "superblock": false, 00:14:55.367 "num_base_bdevs": 4, 00:14:55.367 "num_base_bdevs_discovered": 2, 00:14:55.367 "num_base_bdevs_operational": 4, 00:14:55.367 "base_bdevs_list": [ 00:14:55.367 { 00:14:55.367 "name": null, 00:14:55.367 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:55.367 "is_configured": false, 00:14:55.367 "data_offset": 0, 00:14:55.367 "data_size": 65536 00:14:55.367 }, 00:14:55.367 { 00:14:55.367 "name": null, 00:14:55.367 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:55.367 "is_configured": false, 00:14:55.367 "data_offset": 0, 00:14:55.367 "data_size": 65536 00:14:55.367 }, 00:14:55.367 { 00:14:55.367 "name": "BaseBdev3", 00:14:55.367 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:55.367 "is_configured": true, 00:14:55.367 "data_offset": 0, 00:14:55.367 "data_size": 65536 00:14:55.367 }, 00:14:55.367 { 00:14:55.367 "name": "BaseBdev4", 00:14:55.367 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:55.367 "is_configured": true, 00:14:55.367 "data_offset": 0, 00:14:55.367 "data_size": 65536 00:14:55.367 } 00:14:55.367 ] 00:14:55.367 }' 00:14:55.367 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.367 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.625 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:55.625 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.625 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.625 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:55.625 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.892 [2024-12-07 02:48:06.710061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.892 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.893 "name": "Existed_Raid", 00:14:55.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.893 "strip_size_kb": 64, 00:14:55.893 "state": "configuring", 00:14:55.893 "raid_level": "raid5f", 00:14:55.893 "superblock": false, 00:14:55.893 "num_base_bdevs": 4, 00:14:55.893 "num_base_bdevs_discovered": 3, 00:14:55.893 "num_base_bdevs_operational": 4, 00:14:55.893 "base_bdevs_list": [ 00:14:55.893 { 00:14:55.893 "name": null, 00:14:55.893 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:55.893 "is_configured": false, 00:14:55.893 "data_offset": 0, 00:14:55.893 "data_size": 65536 00:14:55.893 }, 00:14:55.893 { 00:14:55.893 "name": "BaseBdev2", 00:14:55.893 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:55.893 "is_configured": true, 00:14:55.893 "data_offset": 0, 00:14:55.893 "data_size": 65536 00:14:55.893 }, 00:14:55.893 { 00:14:55.893 "name": "BaseBdev3", 00:14:55.893 "uuid": 
"c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:55.893 "is_configured": true, 00:14:55.893 "data_offset": 0, 00:14:55.893 "data_size": 65536 00:14:55.893 }, 00:14:55.893 { 00:14:55.893 "name": "BaseBdev4", 00:14:55.893 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:55.893 "is_configured": true, 00:14:55.893 "data_offset": 0, 00:14:55.893 "data_size": 65536 00:14:55.893 } 00:14:55.893 ] 00:14:55.893 }' 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.893 02:48:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 
512 -b NewBaseBdev -u ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.186 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 [2024-12-07 02:48:07.251714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:56.463 [2024-12-07 02:48:07.251815] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:14:56.463 [2024-12-07 02:48:07.251839] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:56.463 [2024-12-07 02:48:07.252134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:56.463 [2024-12-07 02:48:07.252610] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:14:56.463 [2024-12-07 02:48:07.252661] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:14:56.463 [2024-12-07 02:48:07.252872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.463 NewBaseBdev 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 [ 00:14:56.463 { 00:14:56.463 "name": "NewBaseBdev", 00:14:56.463 "aliases": [ 00:14:56.463 "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00" 00:14:56.463 ], 00:14:56.463 "product_name": "Malloc disk", 00:14:56.463 "block_size": 512, 00:14:56.463 "num_blocks": 65536, 00:14:56.463 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:56.463 "assigned_rate_limits": { 00:14:56.463 "rw_ios_per_sec": 0, 00:14:56.463 "rw_mbytes_per_sec": 0, 00:14:56.463 "r_mbytes_per_sec": 0, 00:14:56.463 "w_mbytes_per_sec": 0 00:14:56.463 }, 00:14:56.463 "claimed": true, 00:14:56.463 "claim_type": "exclusive_write", 00:14:56.463 "zoned": false, 00:14:56.463 "supported_io_types": { 00:14:56.463 "read": true, 00:14:56.463 "write": true, 00:14:56.463 "unmap": true, 00:14:56.463 "flush": true, 00:14:56.463 "reset": true, 00:14:56.463 "nvme_admin": false, 00:14:56.463 "nvme_io": false, 00:14:56.463 "nvme_io_md": false, 00:14:56.463 "write_zeroes": true, 00:14:56.463 "zcopy": true, 00:14:56.463 "get_zone_info": false, 00:14:56.463 "zone_management": false, 00:14:56.463 "zone_append": false, 00:14:56.463 "compare": false, 00:14:56.463 "compare_and_write": false, 00:14:56.463 "abort": true, 
00:14:56.463 "seek_hole": false, 00:14:56.463 "seek_data": false, 00:14:56.463 "copy": true, 00:14:56.463 "nvme_iov_md": false 00:14:56.463 }, 00:14:56.463 "memory_domains": [ 00:14:56.463 { 00:14:56.463 "dma_device_id": "system", 00:14:56.463 "dma_device_type": 1 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.463 "dma_device_type": 2 00:14:56.463 } 00:14:56.463 ], 00:14:56.463 "driver_specific": {} 00:14:56.463 } 00:14:56.463 ] 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.463 "name": "Existed_Raid", 00:14:56.463 "uuid": "a5abd95b-a1e2-4041-8693-d5df6a792f06", 00:14:56.463 "strip_size_kb": 64, 00:14:56.463 "state": "online", 00:14:56.463 "raid_level": "raid5f", 00:14:56.463 "superblock": false, 00:14:56.463 "num_base_bdevs": 4, 00:14:56.463 "num_base_bdevs_discovered": 4, 00:14:56.463 "num_base_bdevs_operational": 4, 00:14:56.463 "base_bdevs_list": [ 00:14:56.463 { 00:14:56.463 "name": "NewBaseBdev", 00:14:56.463 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 65536 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": "BaseBdev2", 00:14:56.463 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 65536 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": "BaseBdev3", 00:14:56.463 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 65536 00:14:56.463 }, 00:14:56.463 { 00:14:56.463 "name": "BaseBdev4", 00:14:56.463 "uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:56.463 "is_configured": true, 00:14:56.463 "data_offset": 0, 00:14:56.463 "data_size": 65536 00:14:56.463 } 00:14:56.463 ] 00:14:56.463 }' 00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:14:56.463 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.723 [2024-12-07 02:48:07.751065] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:56.723 "name": "Existed_Raid", 00:14:56.723 "aliases": [ 00:14:56.723 "a5abd95b-a1e2-4041-8693-d5df6a792f06" 00:14:56.723 ], 00:14:56.723 "product_name": "Raid Volume", 00:14:56.723 "block_size": 512, 00:14:56.723 "num_blocks": 196608, 00:14:56.723 "uuid": "a5abd95b-a1e2-4041-8693-d5df6a792f06", 00:14:56.723 "assigned_rate_limits": { 00:14:56.723 "rw_ios_per_sec": 0, 00:14:56.723 "rw_mbytes_per_sec": 0, 
00:14:56.723 "r_mbytes_per_sec": 0, 00:14:56.723 "w_mbytes_per_sec": 0 00:14:56.723 }, 00:14:56.723 "claimed": false, 00:14:56.723 "zoned": false, 00:14:56.723 "supported_io_types": { 00:14:56.723 "read": true, 00:14:56.723 "write": true, 00:14:56.723 "unmap": false, 00:14:56.723 "flush": false, 00:14:56.723 "reset": true, 00:14:56.723 "nvme_admin": false, 00:14:56.723 "nvme_io": false, 00:14:56.723 "nvme_io_md": false, 00:14:56.723 "write_zeroes": true, 00:14:56.723 "zcopy": false, 00:14:56.723 "get_zone_info": false, 00:14:56.723 "zone_management": false, 00:14:56.723 "zone_append": false, 00:14:56.723 "compare": false, 00:14:56.723 "compare_and_write": false, 00:14:56.723 "abort": false, 00:14:56.723 "seek_hole": false, 00:14:56.723 "seek_data": false, 00:14:56.723 "copy": false, 00:14:56.723 "nvme_iov_md": false 00:14:56.723 }, 00:14:56.723 "driver_specific": { 00:14:56.723 "raid": { 00:14:56.723 "uuid": "a5abd95b-a1e2-4041-8693-d5df6a792f06", 00:14:56.723 "strip_size_kb": 64, 00:14:56.723 "state": "online", 00:14:56.723 "raid_level": "raid5f", 00:14:56.723 "superblock": false, 00:14:56.723 "num_base_bdevs": 4, 00:14:56.723 "num_base_bdevs_discovered": 4, 00:14:56.723 "num_base_bdevs_operational": 4, 00:14:56.723 "base_bdevs_list": [ 00:14:56.723 { 00:14:56.723 "name": "NewBaseBdev", 00:14:56.723 "uuid": "ddfaee9c-9c4f-4183-bd7b-715c0f5c4a00", 00:14:56.723 "is_configured": true, 00:14:56.723 "data_offset": 0, 00:14:56.723 "data_size": 65536 00:14:56.723 }, 00:14:56.723 { 00:14:56.723 "name": "BaseBdev2", 00:14:56.723 "uuid": "73d93fa2-aa59-4d07-8ab7-3972ee98ca8e", 00:14:56.723 "is_configured": true, 00:14:56.723 "data_offset": 0, 00:14:56.723 "data_size": 65536 00:14:56.723 }, 00:14:56.723 { 00:14:56.723 "name": "BaseBdev3", 00:14:56.723 "uuid": "c0335b7e-a241-48a3-bd8a-7194ae053c5b", 00:14:56.723 "is_configured": true, 00:14:56.723 "data_offset": 0, 00:14:56.723 "data_size": 65536 00:14:56.723 }, 00:14:56.723 { 00:14:56.723 "name": "BaseBdev4", 00:14:56.723 
"uuid": "7e151a3e-79e8-4a46-b3fe-8df29e693c12", 00:14:56.723 "is_configured": true, 00:14:56.723 "data_offset": 0, 00:14:56.723 "data_size": 65536 00:14:56.723 } 00:14:56.723 ] 00:14:56.723 } 00:14:56.723 } 00:14:56.723 }' 00:14:56.723 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:56.983 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:56.983 BaseBdev2 00:14:56.983 BaseBdev3 00:14:56.983 BaseBdev4' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.984 02:48:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:56.984 
02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.984 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.244 [2024-12-07 02:48:08.086312] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:57.244 [2024-12-07 02:48:08.086337] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.244 [2024-12-07 02:48:08.086398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.244 [2024-12-07 02:48:08.086649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.244 [2024-12-07 02:48:08.086659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 93516 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@950 -- # '[' -z 93516 ']' 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 93516 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93516 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:57.244 killing process with pid 93516 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93516' 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 93516 00:14:57.244 [2024-12-07 02:48:08.135060] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.244 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 93516 00:14:57.244 [2024-12-07 02:48:08.176329] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.504 ************************************ 00:14:57.504 END TEST raid5f_state_function_test 00:14:57.504 ************************************ 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:57.504 00:14:57.504 real 0m9.740s 00:14:57.504 user 0m16.580s 00:14:57.504 sys 0m2.163s 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.504 02:48:08 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test 
raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:14:57.504 02:48:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:57.504 02:48:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:57.504 02:48:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.504 ************************************ 00:14:57.504 START TEST raid5f_state_function_test_sb 00:14:57.504 ************************************ 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.504 02:48:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:57.504 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:57.505 02:48:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=94164 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94164' 00:14:57.505 Process raid pid: 94164 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 94164 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 94164 ']' 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.505 02:48:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.764 [2024-12-07 02:48:08.608648] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:14:57.764 [2024-12-07 02:48:08.608848] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.764 [2024-12-07 02:48:08.778321] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.764 [2024-12-07 02:48:08.824948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.024 [2024-12-07 02:48:08.867067] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.024 [2024-12-07 02:48:08.867174] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.594 [2024-12-07 02:48:09.416552] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.594 [2024-12-07 02:48:09.416607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.594 [2024-12-07 02:48:09.416619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.594 [2024-12-07 02:48:09.416629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.594 [2024-12-07 02:48:09.416635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:58.594 [2024-12-07 02:48:09.416647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:58.594 [2024-12-07 02:48:09.416653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:58.594 [2024-12-07 02:48:09.416662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.594 "name": "Existed_Raid", 00:14:58.594 "uuid": "47db9dc5-7544-48a7-918f-1cd16d462d2d", 00:14:58.594 "strip_size_kb": 64, 00:14:58.594 "state": "configuring", 00:14:58.594 "raid_level": "raid5f", 00:14:58.594 "superblock": true, 00:14:58.594 "num_base_bdevs": 4, 00:14:58.594 "num_base_bdevs_discovered": 0, 00:14:58.594 "num_base_bdevs_operational": 4, 00:14:58.594 "base_bdevs_list": [ 00:14:58.594 { 00:14:58.594 "name": "BaseBdev1", 00:14:58.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.594 "is_configured": false, 00:14:58.594 "data_offset": 0, 00:14:58.594 "data_size": 0 00:14:58.594 }, 00:14:58.594 { 00:14:58.594 "name": "BaseBdev2", 00:14:58.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.594 "is_configured": false, 00:14:58.594 "data_offset": 0, 00:14:58.594 "data_size": 0 00:14:58.594 }, 00:14:58.594 { 00:14:58.594 "name": "BaseBdev3", 00:14:58.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.594 "is_configured": false, 00:14:58.594 "data_offset": 0, 00:14:58.594 "data_size": 0 00:14:58.594 }, 00:14:58.594 { 00:14:58.594 "name": "BaseBdev4", 00:14:58.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.594 "is_configured": false, 00:14:58.594 "data_offset": 0, 00:14:58.594 "data_size": 0 00:14:58.594 } 00:14:58.594 ] 00:14:58.594 }' 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.594 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.854 [2024-12-07 02:48:09.843806] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.854 [2024-12-07 02:48:09.843889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.854 [2024-12-07 02:48:09.855827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.854 [2024-12-07 02:48:09.855894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.854 [2024-12-07 02:48:09.855918] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.854 [2024-12-07 02:48:09.855947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.854 [2024-12-07 02:48:09.855964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:58.854 [2024-12-07 02:48:09.855983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:58.854 [2024-12-07 02:48:09.855998] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:58.854 [2024-12-07 02:48:09.856017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.854 [2024-12-07 02:48:09.876553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.854 BaseBdev1 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:58.854 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.855 [ 00:14:58.855 { 00:14:58.855 "name": "BaseBdev1", 00:14:58.855 "aliases": [ 00:14:58.855 "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09" 00:14:58.855 ], 00:14:58.855 "product_name": "Malloc disk", 00:14:58.855 "block_size": 512, 00:14:58.855 "num_blocks": 65536, 00:14:58.855 "uuid": "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09", 00:14:58.855 "assigned_rate_limits": { 00:14:58.855 "rw_ios_per_sec": 0, 00:14:58.855 "rw_mbytes_per_sec": 0, 00:14:58.855 "r_mbytes_per_sec": 0, 00:14:58.855 "w_mbytes_per_sec": 0 00:14:58.855 }, 00:14:58.855 "claimed": true, 00:14:58.855 "claim_type": "exclusive_write", 00:14:58.855 "zoned": false, 00:14:58.855 "supported_io_types": { 00:14:58.855 "read": true, 00:14:58.855 "write": true, 00:14:58.855 "unmap": true, 00:14:58.855 "flush": true, 00:14:58.855 "reset": true, 00:14:58.855 "nvme_admin": false, 00:14:58.855 "nvme_io": false, 00:14:58.855 "nvme_io_md": false, 00:14:58.855 "write_zeroes": true, 00:14:58.855 "zcopy": true, 00:14:58.855 "get_zone_info": false, 00:14:58.855 "zone_management": false, 00:14:58.855 "zone_append": false, 00:14:58.855 "compare": false, 00:14:58.855 "compare_and_write": false, 00:14:58.855 "abort": true, 00:14:58.855 "seek_hole": false, 00:14:58.855 "seek_data": false, 00:14:58.855 "copy": true, 00:14:58.855 "nvme_iov_md": false 00:14:58.855 }, 00:14:58.855 "memory_domains": [ 00:14:58.855 { 00:14:58.855 "dma_device_id": "system", 00:14:58.855 "dma_device_type": 1 00:14:58.855 }, 00:14:58.855 { 00:14:58.855 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:58.855 "dma_device_type": 2 00:14:58.855 } 00:14:58.855 ], 00:14:58.855 "driver_specific": {} 00:14:58.855 } 00:14:58.855 ] 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.855 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.855 02:48:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.115 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.115 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.115 "name": "Existed_Raid", 00:14:59.115 "uuid": "466a66eb-ce14-4a5d-b4e2-d966ba2e03ea", 00:14:59.115 "strip_size_kb": 64, 00:14:59.115 "state": "configuring", 00:14:59.115 "raid_level": "raid5f", 00:14:59.115 "superblock": true, 00:14:59.115 "num_base_bdevs": 4, 00:14:59.115 "num_base_bdevs_discovered": 1, 00:14:59.115 "num_base_bdevs_operational": 4, 00:14:59.115 "base_bdevs_list": [ 00:14:59.115 { 00:14:59.115 "name": "BaseBdev1", 00:14:59.115 "uuid": "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09", 00:14:59.115 "is_configured": true, 00:14:59.115 "data_offset": 2048, 00:14:59.115 "data_size": 63488 00:14:59.115 }, 00:14:59.115 { 00:14:59.115 "name": "BaseBdev2", 00:14:59.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.115 "is_configured": false, 00:14:59.115 "data_offset": 0, 00:14:59.115 "data_size": 0 00:14:59.115 }, 00:14:59.115 { 00:14:59.115 "name": "BaseBdev3", 00:14:59.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.115 "is_configured": false, 00:14:59.115 "data_offset": 0, 00:14:59.115 "data_size": 0 00:14:59.115 }, 00:14:59.115 { 00:14:59.115 "name": "BaseBdev4", 00:14:59.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.115 "is_configured": false, 00:14:59.115 "data_offset": 0, 00:14:59.115 "data_size": 0 00:14:59.115 } 00:14:59.115 ] 00:14:59.115 }' 00:14:59.115 02:48:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.116 02:48:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:59.376 02:48:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.376 [2024-12-07 02:48:10.363828] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.376 [2024-12-07 02:48:10.363864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.376 [2024-12-07 02:48:10.375866] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.376 [2024-12-07 02:48:10.377653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.376 [2024-12-07 02:48:10.377687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.376 [2024-12-07 02:48:10.377695] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:59.376 [2024-12-07 02:48:10.377703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:59.376 [2024-12-07 02:48:10.377709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:59.376 [2024-12-07 02:48:10.377717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.376 02:48:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.376 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.376 "name": "Existed_Raid", 00:14:59.376 "uuid": "42f7dd21-a724-4620-a7aa-4413d14d865e", 00:14:59.376 "strip_size_kb": 64, 00:14:59.376 "state": "configuring", 00:14:59.376 "raid_level": "raid5f", 00:14:59.376 "superblock": true, 00:14:59.376 "num_base_bdevs": 4, 00:14:59.376 "num_base_bdevs_discovered": 1, 00:14:59.376 "num_base_bdevs_operational": 4, 00:14:59.376 "base_bdevs_list": [ 00:14:59.376 { 00:14:59.376 "name": "BaseBdev1", 00:14:59.376 "uuid": "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09", 00:14:59.376 "is_configured": true, 00:14:59.376 "data_offset": 2048, 00:14:59.376 "data_size": 63488 00:14:59.376 }, 00:14:59.376 { 00:14:59.377 "name": "BaseBdev2", 00:14:59.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.377 "is_configured": false, 00:14:59.377 "data_offset": 0, 00:14:59.377 "data_size": 0 00:14:59.377 }, 00:14:59.377 { 00:14:59.377 "name": "BaseBdev3", 00:14:59.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.377 "is_configured": false, 00:14:59.377 "data_offset": 0, 00:14:59.377 "data_size": 0 00:14:59.377 }, 00:14:59.377 { 00:14:59.377 "name": "BaseBdev4", 00:14:59.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.377 "is_configured": false, 00:14:59.377 "data_offset": 0, 00:14:59.377 "data_size": 0 00:14:59.377 } 00:14:59.377 ] 00:14:59.377 }' 00:14:59.377 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.377 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 [2024-12-07 02:48:10.842237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.948 BaseBdev2 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 [ 00:14:59.948 { 00:14:59.948 "name": "BaseBdev2", 00:14:59.948 "aliases": [ 00:14:59.948 
"c49ca3ae-4f08-4c7b-9961-34ea4091963c" 00:14:59.948 ], 00:14:59.948 "product_name": "Malloc disk", 00:14:59.948 "block_size": 512, 00:14:59.948 "num_blocks": 65536, 00:14:59.948 "uuid": "c49ca3ae-4f08-4c7b-9961-34ea4091963c", 00:14:59.948 "assigned_rate_limits": { 00:14:59.948 "rw_ios_per_sec": 0, 00:14:59.948 "rw_mbytes_per_sec": 0, 00:14:59.948 "r_mbytes_per_sec": 0, 00:14:59.948 "w_mbytes_per_sec": 0 00:14:59.948 }, 00:14:59.948 "claimed": true, 00:14:59.948 "claim_type": "exclusive_write", 00:14:59.948 "zoned": false, 00:14:59.948 "supported_io_types": { 00:14:59.948 "read": true, 00:14:59.948 "write": true, 00:14:59.948 "unmap": true, 00:14:59.948 "flush": true, 00:14:59.948 "reset": true, 00:14:59.948 "nvme_admin": false, 00:14:59.948 "nvme_io": false, 00:14:59.948 "nvme_io_md": false, 00:14:59.948 "write_zeroes": true, 00:14:59.948 "zcopy": true, 00:14:59.948 "get_zone_info": false, 00:14:59.948 "zone_management": false, 00:14:59.948 "zone_append": false, 00:14:59.948 "compare": false, 00:14:59.948 "compare_and_write": false, 00:14:59.948 "abort": true, 00:14:59.948 "seek_hole": false, 00:14:59.948 "seek_data": false, 00:14:59.948 "copy": true, 00:14:59.948 "nvme_iov_md": false 00:14:59.948 }, 00:14:59.948 "memory_domains": [ 00:14:59.948 { 00:14:59.948 "dma_device_id": "system", 00:14:59.948 "dma_device_type": 1 00:14:59.948 }, 00:14:59.948 { 00:14:59.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.948 "dma_device_type": 2 00:14:59.948 } 00:14:59.948 ], 00:14:59.948 "driver_specific": {} 00:14:59.948 } 00:14:59.948 ] 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.948 "name": "Existed_Raid", 00:14:59.948 "uuid": 
"42f7dd21-a724-4620-a7aa-4413d14d865e", 00:14:59.948 "strip_size_kb": 64, 00:14:59.948 "state": "configuring", 00:14:59.948 "raid_level": "raid5f", 00:14:59.948 "superblock": true, 00:14:59.948 "num_base_bdevs": 4, 00:14:59.948 "num_base_bdevs_discovered": 2, 00:14:59.948 "num_base_bdevs_operational": 4, 00:14:59.948 "base_bdevs_list": [ 00:14:59.948 { 00:14:59.948 "name": "BaseBdev1", 00:14:59.948 "uuid": "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09", 00:14:59.948 "is_configured": true, 00:14:59.948 "data_offset": 2048, 00:14:59.948 "data_size": 63488 00:14:59.948 }, 00:14:59.948 { 00:14:59.948 "name": "BaseBdev2", 00:14:59.948 "uuid": "c49ca3ae-4f08-4c7b-9961-34ea4091963c", 00:14:59.948 "is_configured": true, 00:14:59.948 "data_offset": 2048, 00:14:59.948 "data_size": 63488 00:14:59.948 }, 00:14:59.948 { 00:14:59.948 "name": "BaseBdev3", 00:14:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.948 "is_configured": false, 00:14:59.948 "data_offset": 0, 00:14:59.948 "data_size": 0 00:14:59.948 }, 00:14:59.948 { 00:14:59.948 "name": "BaseBdev4", 00:14:59.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.948 "is_configured": false, 00:14:59.948 "data_offset": 0, 00:14:59.948 "data_size": 0 00:14:59.948 } 00:14:59.948 ] 00:14:59.948 }' 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.948 02:48:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.519 [2024-12-07 02:48:11.364265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.519 BaseBdev3 
00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.519 [ 00:15:00.519 { 00:15:00.519 "name": "BaseBdev3", 00:15:00.519 "aliases": [ 00:15:00.519 "e55be0e1-89bd-402a-b6b6-fd8b49d43d0a" 00:15:00.519 ], 00:15:00.519 "product_name": "Malloc disk", 00:15:00.519 "block_size": 512, 00:15:00.519 "num_blocks": 65536, 00:15:00.519 "uuid": "e55be0e1-89bd-402a-b6b6-fd8b49d43d0a", 00:15:00.519 
"assigned_rate_limits": { 00:15:00.519 "rw_ios_per_sec": 0, 00:15:00.519 "rw_mbytes_per_sec": 0, 00:15:00.519 "r_mbytes_per_sec": 0, 00:15:00.519 "w_mbytes_per_sec": 0 00:15:00.519 }, 00:15:00.519 "claimed": true, 00:15:00.519 "claim_type": "exclusive_write", 00:15:00.519 "zoned": false, 00:15:00.519 "supported_io_types": { 00:15:00.519 "read": true, 00:15:00.519 "write": true, 00:15:00.519 "unmap": true, 00:15:00.519 "flush": true, 00:15:00.519 "reset": true, 00:15:00.519 "nvme_admin": false, 00:15:00.519 "nvme_io": false, 00:15:00.519 "nvme_io_md": false, 00:15:00.519 "write_zeroes": true, 00:15:00.519 "zcopy": true, 00:15:00.519 "get_zone_info": false, 00:15:00.519 "zone_management": false, 00:15:00.519 "zone_append": false, 00:15:00.519 "compare": false, 00:15:00.519 "compare_and_write": false, 00:15:00.519 "abort": true, 00:15:00.519 "seek_hole": false, 00:15:00.519 "seek_data": false, 00:15:00.519 "copy": true, 00:15:00.519 "nvme_iov_md": false 00:15:00.519 }, 00:15:00.519 "memory_domains": [ 00:15:00.519 { 00:15:00.519 "dma_device_id": "system", 00:15:00.519 "dma_device_type": 1 00:15:00.519 }, 00:15:00.519 { 00:15:00.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.519 "dma_device_type": 2 00:15:00.519 } 00:15:00.519 ], 00:15:00.519 "driver_specific": {} 00:15:00.519 } 00:15:00.519 ] 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.519 "name": "Existed_Raid", 00:15:00.519 "uuid": "42f7dd21-a724-4620-a7aa-4413d14d865e", 00:15:00.519 "strip_size_kb": 64, 00:15:00.519 "state": "configuring", 00:15:00.519 "raid_level": "raid5f", 00:15:00.519 "superblock": true, 00:15:00.519 "num_base_bdevs": 4, 00:15:00.519 "num_base_bdevs_discovered": 3, 
00:15:00.519 "num_base_bdevs_operational": 4, 00:15:00.519 "base_bdevs_list": [ 00:15:00.519 { 00:15:00.519 "name": "BaseBdev1", 00:15:00.519 "uuid": "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09", 00:15:00.519 "is_configured": true, 00:15:00.519 "data_offset": 2048, 00:15:00.519 "data_size": 63488 00:15:00.519 }, 00:15:00.519 { 00:15:00.519 "name": "BaseBdev2", 00:15:00.519 "uuid": "c49ca3ae-4f08-4c7b-9961-34ea4091963c", 00:15:00.519 "is_configured": true, 00:15:00.519 "data_offset": 2048, 00:15:00.519 "data_size": 63488 00:15:00.519 }, 00:15:00.519 { 00:15:00.519 "name": "BaseBdev3", 00:15:00.519 "uuid": "e55be0e1-89bd-402a-b6b6-fd8b49d43d0a", 00:15:00.519 "is_configured": true, 00:15:00.519 "data_offset": 2048, 00:15:00.519 "data_size": 63488 00:15:00.519 }, 00:15:00.519 { 00:15:00.519 "name": "BaseBdev4", 00:15:00.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.519 "is_configured": false, 00:15:00.519 "data_offset": 0, 00:15:00.519 "data_size": 0 00:15:00.519 } 00:15:00.519 ] 00:15:00.519 }' 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.519 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.779 [2024-12-07 02:48:11.834484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:00.779 [2024-12-07 02:48:11.834701] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:00.779 [2024-12-07 02:48:11.834717] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:00.779 BaseBdev4 
00:15:00.779 [2024-12-07 02:48:11.835007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:00.779 [2024-12-07 02:48:11.835441] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:00.779 [2024-12-07 02:48:11.835455] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.779 [2024-12-07 02:48:11.835576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.779 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:00.780 02:48:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.780 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.040 [ 00:15:01.040 { 00:15:01.040 "name": "BaseBdev4", 00:15:01.040 "aliases": [ 00:15:01.040 "33439ee5-1ac8-4a8f-a4cc-04031e60c666" 00:15:01.040 ], 00:15:01.040 "product_name": "Malloc disk", 00:15:01.040 "block_size": 512, 00:15:01.040 "num_blocks": 65536, 00:15:01.040 "uuid": "33439ee5-1ac8-4a8f-a4cc-04031e60c666", 00:15:01.040 "assigned_rate_limits": { 00:15:01.040 "rw_ios_per_sec": 0, 00:15:01.040 "rw_mbytes_per_sec": 0, 00:15:01.040 "r_mbytes_per_sec": 0, 00:15:01.040 "w_mbytes_per_sec": 0 00:15:01.040 }, 00:15:01.040 "claimed": true, 00:15:01.040 "claim_type": "exclusive_write", 00:15:01.040 "zoned": false, 00:15:01.040 "supported_io_types": { 00:15:01.040 "read": true, 00:15:01.040 "write": true, 00:15:01.040 "unmap": true, 00:15:01.040 "flush": true, 00:15:01.040 "reset": true, 00:15:01.040 "nvme_admin": false, 00:15:01.040 "nvme_io": false, 00:15:01.040 "nvme_io_md": false, 00:15:01.040 "write_zeroes": true, 00:15:01.040 "zcopy": true, 00:15:01.040 "get_zone_info": false, 00:15:01.040 "zone_management": false, 00:15:01.040 "zone_append": false, 00:15:01.040 "compare": false, 00:15:01.040 "compare_and_write": false, 00:15:01.040 "abort": true, 00:15:01.040 "seek_hole": false, 00:15:01.040 "seek_data": false, 00:15:01.040 "copy": true, 00:15:01.040 "nvme_iov_md": false 00:15:01.040 }, 00:15:01.040 "memory_domains": [ 00:15:01.040 { 00:15:01.040 "dma_device_id": "system", 00:15:01.040 "dma_device_type": 1 00:15:01.040 }, 00:15:01.040 { 00:15:01.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.040 "dma_device_type": 2 00:15:01.040 } 00:15:01.040 ], 00:15:01.040 "driver_specific": {} 00:15:01.040 } 00:15:01.040 ] 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.040 02:48:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.040 "name": "Existed_Raid", 00:15:01.040 "uuid": "42f7dd21-a724-4620-a7aa-4413d14d865e", 00:15:01.040 "strip_size_kb": 64, 00:15:01.040 "state": "online", 00:15:01.040 "raid_level": "raid5f", 00:15:01.040 "superblock": true, 00:15:01.040 "num_base_bdevs": 4, 00:15:01.040 "num_base_bdevs_discovered": 4, 00:15:01.040 "num_base_bdevs_operational": 4, 00:15:01.040 "base_bdevs_list": [ 00:15:01.040 { 00:15:01.040 "name": "BaseBdev1", 00:15:01.040 "uuid": "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09", 00:15:01.040 "is_configured": true, 00:15:01.040 "data_offset": 2048, 00:15:01.040 "data_size": 63488 00:15:01.040 }, 00:15:01.040 { 00:15:01.040 "name": "BaseBdev2", 00:15:01.040 "uuid": "c49ca3ae-4f08-4c7b-9961-34ea4091963c", 00:15:01.040 "is_configured": true, 00:15:01.040 "data_offset": 2048, 00:15:01.040 "data_size": 63488 00:15:01.040 }, 00:15:01.040 { 00:15:01.040 "name": "BaseBdev3", 00:15:01.040 "uuid": "e55be0e1-89bd-402a-b6b6-fd8b49d43d0a", 00:15:01.040 "is_configured": true, 00:15:01.040 "data_offset": 2048, 00:15:01.040 "data_size": 63488 00:15:01.040 }, 00:15:01.040 { 00:15:01.040 "name": "BaseBdev4", 00:15:01.040 "uuid": "33439ee5-1ac8-4a8f-a4cc-04031e60c666", 00:15:01.040 "is_configured": true, 00:15:01.040 "data_offset": 2048, 00:15:01.040 "data_size": 63488 00:15:01.040 } 00:15:01.040 ] 00:15:01.040 }' 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.040 02:48:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.300 [2024-12-07 02:48:12.337893] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.300 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.560 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.560 "name": "Existed_Raid", 00:15:01.560 "aliases": [ 00:15:01.560 "42f7dd21-a724-4620-a7aa-4413d14d865e" 00:15:01.560 ], 00:15:01.560 "product_name": "Raid Volume", 00:15:01.560 "block_size": 512, 00:15:01.560 "num_blocks": 190464, 00:15:01.560 "uuid": "42f7dd21-a724-4620-a7aa-4413d14d865e", 00:15:01.560 "assigned_rate_limits": { 00:15:01.560 "rw_ios_per_sec": 0, 00:15:01.560 "rw_mbytes_per_sec": 0, 00:15:01.560 "r_mbytes_per_sec": 0, 00:15:01.560 "w_mbytes_per_sec": 0 00:15:01.560 }, 00:15:01.560 "claimed": false, 00:15:01.560 "zoned": false, 00:15:01.560 "supported_io_types": { 00:15:01.560 "read": true, 00:15:01.560 "write": true, 00:15:01.560 "unmap": false, 00:15:01.560 "flush": false, 
00:15:01.560 "reset": true, 00:15:01.560 "nvme_admin": false, 00:15:01.560 "nvme_io": false, 00:15:01.560 "nvme_io_md": false, 00:15:01.560 "write_zeroes": true, 00:15:01.560 "zcopy": false, 00:15:01.560 "get_zone_info": false, 00:15:01.560 "zone_management": false, 00:15:01.560 "zone_append": false, 00:15:01.560 "compare": false, 00:15:01.560 "compare_and_write": false, 00:15:01.560 "abort": false, 00:15:01.560 "seek_hole": false, 00:15:01.560 "seek_data": false, 00:15:01.560 "copy": false, 00:15:01.560 "nvme_iov_md": false 00:15:01.560 }, 00:15:01.560 "driver_specific": { 00:15:01.560 "raid": { 00:15:01.560 "uuid": "42f7dd21-a724-4620-a7aa-4413d14d865e", 00:15:01.560 "strip_size_kb": 64, 00:15:01.560 "state": "online", 00:15:01.560 "raid_level": "raid5f", 00:15:01.560 "superblock": true, 00:15:01.560 "num_base_bdevs": 4, 00:15:01.560 "num_base_bdevs_discovered": 4, 00:15:01.560 "num_base_bdevs_operational": 4, 00:15:01.560 "base_bdevs_list": [ 00:15:01.560 { 00:15:01.560 "name": "BaseBdev1", 00:15:01.560 "uuid": "67b2d768-ceda-48ac-a7cb-2d5e3cc06e09", 00:15:01.560 "is_configured": true, 00:15:01.560 "data_offset": 2048, 00:15:01.560 "data_size": 63488 00:15:01.560 }, 00:15:01.560 { 00:15:01.560 "name": "BaseBdev2", 00:15:01.560 "uuid": "c49ca3ae-4f08-4c7b-9961-34ea4091963c", 00:15:01.560 "is_configured": true, 00:15:01.560 "data_offset": 2048, 00:15:01.560 "data_size": 63488 00:15:01.560 }, 00:15:01.560 { 00:15:01.560 "name": "BaseBdev3", 00:15:01.560 "uuid": "e55be0e1-89bd-402a-b6b6-fd8b49d43d0a", 00:15:01.560 "is_configured": true, 00:15:01.560 "data_offset": 2048, 00:15:01.560 "data_size": 63488 00:15:01.560 }, 00:15:01.560 { 00:15:01.560 "name": "BaseBdev4", 00:15:01.560 "uuid": "33439ee5-1ac8-4a8f-a4cc-04031e60c666", 00:15:01.560 "is_configured": true, 00:15:01.560 "data_offset": 2048, 00:15:01.560 "data_size": 63488 00:15:01.560 } 00:15:01.561 ] 00:15:01.561 } 00:15:01.561 } 00:15:01.561 }' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:01.561 BaseBdev2 00:15:01.561 BaseBdev3 00:15:01.561 BaseBdev4' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.561 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.821 [2024-12-07 02:48:12.661173] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.821 "name": "Existed_Raid", 00:15:01.821 "uuid": "42f7dd21-a724-4620-a7aa-4413d14d865e", 00:15:01.821 "strip_size_kb": 64, 00:15:01.821 "state": "online", 00:15:01.821 "raid_level": "raid5f", 00:15:01.821 "superblock": true, 00:15:01.821 "num_base_bdevs": 4, 00:15:01.821 "num_base_bdevs_discovered": 3, 00:15:01.821 "num_base_bdevs_operational": 3, 00:15:01.821 "base_bdevs_list": [ 00:15:01.821 { 00:15:01.821 "name": null, 00:15:01.821 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:01.821 "is_configured": false, 00:15:01.821 "data_offset": 0, 00:15:01.821 "data_size": 63488 00:15:01.821 }, 00:15:01.821 { 00:15:01.821 "name": "BaseBdev2", 00:15:01.821 "uuid": "c49ca3ae-4f08-4c7b-9961-34ea4091963c", 00:15:01.821 "is_configured": true, 00:15:01.821 "data_offset": 2048, 00:15:01.821 "data_size": 63488 00:15:01.821 }, 00:15:01.821 { 00:15:01.821 "name": "BaseBdev3", 00:15:01.821 "uuid": "e55be0e1-89bd-402a-b6b6-fd8b49d43d0a", 00:15:01.821 "is_configured": true, 00:15:01.821 "data_offset": 2048, 00:15:01.821 "data_size": 63488 00:15:01.821 }, 00:15:01.821 { 00:15:01.821 "name": "BaseBdev4", 00:15:01.821 "uuid": "33439ee5-1ac8-4a8f-a4cc-04031e60c666", 00:15:01.821 "is_configured": true, 00:15:01.821 "data_offset": 2048, 00:15:01.821 "data_size": 63488 00:15:01.821 } 00:15:01.821 ] 00:15:01.821 }' 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.821 02:48:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.081 [2024-12-07 02:48:13.139986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.081 [2024-12-07 02:48:13.140129] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.081 [2024-12-07 02:48:13.151403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:02.081 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.341 
02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.341 [2024-12-07 02:48:13.207323] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:02.341 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.342 [2024-12-07 02:48:13.278421] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:02.342 [2024-12-07 02:48:13.278466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.342 BaseBdev2 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.342 [ 00:15:02.342 { 00:15:02.342 "name": "BaseBdev2", 00:15:02.342 "aliases": [ 00:15:02.342 "17321eda-c608-412a-91a8-70a54e21f8e7" 00:15:02.342 ], 00:15:02.342 "product_name": "Malloc disk", 00:15:02.342 "block_size": 512, 00:15:02.342 "num_blocks": 65536, 00:15:02.342 "uuid": 
"17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:02.342 "assigned_rate_limits": { 00:15:02.342 "rw_ios_per_sec": 0, 00:15:02.342 "rw_mbytes_per_sec": 0, 00:15:02.342 "r_mbytes_per_sec": 0, 00:15:02.342 "w_mbytes_per_sec": 0 00:15:02.342 }, 00:15:02.342 "claimed": false, 00:15:02.342 "zoned": false, 00:15:02.342 "supported_io_types": { 00:15:02.342 "read": true, 00:15:02.342 "write": true, 00:15:02.342 "unmap": true, 00:15:02.342 "flush": true, 00:15:02.342 "reset": true, 00:15:02.342 "nvme_admin": false, 00:15:02.342 "nvme_io": false, 00:15:02.342 "nvme_io_md": false, 00:15:02.342 "write_zeroes": true, 00:15:02.342 "zcopy": true, 00:15:02.342 "get_zone_info": false, 00:15:02.342 "zone_management": false, 00:15:02.342 "zone_append": false, 00:15:02.342 "compare": false, 00:15:02.342 "compare_and_write": false, 00:15:02.342 "abort": true, 00:15:02.342 "seek_hole": false, 00:15:02.342 "seek_data": false, 00:15:02.342 "copy": true, 00:15:02.342 "nvme_iov_md": false 00:15:02.342 }, 00:15:02.342 "memory_domains": [ 00:15:02.342 { 00:15:02.342 "dma_device_id": "system", 00:15:02.342 "dma_device_type": 1 00:15:02.342 }, 00:15:02.342 { 00:15:02.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.342 "dma_device_type": 2 00:15:02.342 } 00:15:02.342 ], 00:15:02.342 "driver_specific": {} 00:15:02.342 } 00:15:02.342 ] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.342 BaseBdev3 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.342 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 [ 00:15:02.602 { 00:15:02.602 "name": "BaseBdev3", 00:15:02.602 "aliases": [ 00:15:02.602 "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd" 00:15:02.602 ], 00:15:02.602 
"product_name": "Malloc disk", 00:15:02.602 "block_size": 512, 00:15:02.602 "num_blocks": 65536, 00:15:02.602 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:02.602 "assigned_rate_limits": { 00:15:02.602 "rw_ios_per_sec": 0, 00:15:02.602 "rw_mbytes_per_sec": 0, 00:15:02.602 "r_mbytes_per_sec": 0, 00:15:02.602 "w_mbytes_per_sec": 0 00:15:02.602 }, 00:15:02.602 "claimed": false, 00:15:02.602 "zoned": false, 00:15:02.602 "supported_io_types": { 00:15:02.602 "read": true, 00:15:02.602 "write": true, 00:15:02.602 "unmap": true, 00:15:02.602 "flush": true, 00:15:02.602 "reset": true, 00:15:02.602 "nvme_admin": false, 00:15:02.602 "nvme_io": false, 00:15:02.602 "nvme_io_md": false, 00:15:02.602 "write_zeroes": true, 00:15:02.602 "zcopy": true, 00:15:02.602 "get_zone_info": false, 00:15:02.602 "zone_management": false, 00:15:02.602 "zone_append": false, 00:15:02.602 "compare": false, 00:15:02.602 "compare_and_write": false, 00:15:02.602 "abort": true, 00:15:02.602 "seek_hole": false, 00:15:02.602 "seek_data": false, 00:15:02.602 "copy": true, 00:15:02.602 "nvme_iov_md": false 00:15:02.602 }, 00:15:02.602 "memory_domains": [ 00:15:02.602 { 00:15:02.602 "dma_device_id": "system", 00:15:02.602 "dma_device_type": 1 00:15:02.602 }, 00:15:02.602 { 00:15:02.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.602 "dma_device_type": 2 00:15:02.602 } 00:15:02.602 ], 00:15:02.602 "driver_specific": {} 00:15:02.602 } 00:15:02.602 ] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 BaseBdev4 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 [ 00:15:02.602 { 00:15:02.602 "name": "BaseBdev4", 00:15:02.602 
"aliases": [ 00:15:02.602 "93463a25-43c5-4f0c-bb09-1af78a4c667e" 00:15:02.602 ], 00:15:02.602 "product_name": "Malloc disk", 00:15:02.602 "block_size": 512, 00:15:02.602 "num_blocks": 65536, 00:15:02.602 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:02.602 "assigned_rate_limits": { 00:15:02.602 "rw_ios_per_sec": 0, 00:15:02.602 "rw_mbytes_per_sec": 0, 00:15:02.602 "r_mbytes_per_sec": 0, 00:15:02.602 "w_mbytes_per_sec": 0 00:15:02.602 }, 00:15:02.602 "claimed": false, 00:15:02.602 "zoned": false, 00:15:02.602 "supported_io_types": { 00:15:02.602 "read": true, 00:15:02.602 "write": true, 00:15:02.602 "unmap": true, 00:15:02.602 "flush": true, 00:15:02.602 "reset": true, 00:15:02.602 "nvme_admin": false, 00:15:02.602 "nvme_io": false, 00:15:02.602 "nvme_io_md": false, 00:15:02.602 "write_zeroes": true, 00:15:02.602 "zcopy": true, 00:15:02.602 "get_zone_info": false, 00:15:02.602 "zone_management": false, 00:15:02.602 "zone_append": false, 00:15:02.602 "compare": false, 00:15:02.602 "compare_and_write": false, 00:15:02.602 "abort": true, 00:15:02.602 "seek_hole": false, 00:15:02.602 "seek_data": false, 00:15:02.602 "copy": true, 00:15:02.602 "nvme_iov_md": false 00:15:02.602 }, 00:15:02.602 "memory_domains": [ 00:15:02.602 { 00:15:02.602 "dma_device_id": "system", 00:15:02.602 "dma_device_type": 1 00:15:02.602 }, 00:15:02.602 { 00:15:02.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.602 "dma_device_type": 2 00:15:02.602 } 00:15:02.602 ], 00:15:02.602 "driver_specific": {} 00:15:02.602 } 00:15:02.602 ] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:02.602 
02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 [2024-12-07 02:48:13.504896] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.602 [2024-12-07 02:48:13.504975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.602 [2024-12-07 02:48:13.505012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.602 [2024-12-07 02:48:13.506798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.602 [2024-12-07 02:48:13.506882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.602 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.602 "name": "Existed_Raid", 00:15:02.602 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:02.602 "strip_size_kb": 64, 00:15:02.602 "state": "configuring", 00:15:02.602 "raid_level": "raid5f", 00:15:02.602 "superblock": true, 00:15:02.602 "num_base_bdevs": 4, 00:15:02.602 "num_base_bdevs_discovered": 3, 00:15:02.602 "num_base_bdevs_operational": 4, 00:15:02.602 "base_bdevs_list": [ 00:15:02.602 { 00:15:02.602 "name": "BaseBdev1", 00:15:02.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.602 "is_configured": false, 00:15:02.602 "data_offset": 0, 00:15:02.602 "data_size": 0 00:15:02.602 }, 00:15:02.602 { 00:15:02.602 "name": "BaseBdev2", 00:15:02.602 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:02.602 "is_configured": true, 00:15:02.602 "data_offset": 2048, 00:15:02.602 "data_size": 63488 00:15:02.602 }, 00:15:02.602 { 00:15:02.602 "name": "BaseBdev3", 
00:15:02.602 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:02.602 "is_configured": true, 00:15:02.602 "data_offset": 2048, 00:15:02.602 "data_size": 63488 00:15:02.602 }, 00:15:02.602 { 00:15:02.602 "name": "BaseBdev4", 00:15:02.602 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:02.602 "is_configured": true, 00:15:02.602 "data_offset": 2048, 00:15:02.603 "data_size": 63488 00:15:02.603 } 00:15:02.603 ] 00:15:02.603 }' 00:15:02.603 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.603 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.172 [2024-12-07 02:48:13.976057] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.172 
02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.172 02:48:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.172 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.172 "name": "Existed_Raid", 00:15:03.172 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:03.172 "strip_size_kb": 64, 00:15:03.172 "state": "configuring", 00:15:03.172 "raid_level": "raid5f", 00:15:03.172 "superblock": true, 00:15:03.172 "num_base_bdevs": 4, 00:15:03.172 "num_base_bdevs_discovered": 2, 00:15:03.172 "num_base_bdevs_operational": 4, 00:15:03.172 "base_bdevs_list": [ 00:15:03.172 { 00:15:03.172 "name": "BaseBdev1", 00:15:03.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.172 "is_configured": false, 00:15:03.172 "data_offset": 0, 00:15:03.172 "data_size": 0 00:15:03.172 }, 00:15:03.172 { 00:15:03.172 "name": null, 00:15:03.172 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:03.172 "is_configured": false, 00:15:03.172 "data_offset": 0, 00:15:03.172 "data_size": 63488 00:15:03.172 }, 00:15:03.172 { 
00:15:03.172 "name": "BaseBdev3", 00:15:03.172 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:03.172 "is_configured": true, 00:15:03.172 "data_offset": 2048, 00:15:03.172 "data_size": 63488 00:15:03.172 }, 00:15:03.172 { 00:15:03.172 "name": "BaseBdev4", 00:15:03.172 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:03.172 "is_configured": true, 00:15:03.172 "data_offset": 2048, 00:15:03.172 "data_size": 63488 00:15:03.172 } 00:15:03.172 ] 00:15:03.172 }' 00:15:03.172 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.172 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 [2024-12-07 02:48:14.414330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.432 BaseBdev1 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 [ 00:15:03.432 { 00:15:03.432 "name": "BaseBdev1", 00:15:03.432 "aliases": [ 00:15:03.432 "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c" 00:15:03.432 ], 00:15:03.432 "product_name": "Malloc disk", 00:15:03.432 "block_size": 512, 00:15:03.432 "num_blocks": 65536, 00:15:03.432 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:03.432 "assigned_rate_limits": { 00:15:03.432 "rw_ios_per_sec": 0, 00:15:03.432 "rw_mbytes_per_sec": 0, 00:15:03.432 
"r_mbytes_per_sec": 0, 00:15:03.432 "w_mbytes_per_sec": 0 00:15:03.432 }, 00:15:03.432 "claimed": true, 00:15:03.432 "claim_type": "exclusive_write", 00:15:03.432 "zoned": false, 00:15:03.432 "supported_io_types": { 00:15:03.432 "read": true, 00:15:03.432 "write": true, 00:15:03.432 "unmap": true, 00:15:03.432 "flush": true, 00:15:03.432 "reset": true, 00:15:03.432 "nvme_admin": false, 00:15:03.432 "nvme_io": false, 00:15:03.432 "nvme_io_md": false, 00:15:03.432 "write_zeroes": true, 00:15:03.432 "zcopy": true, 00:15:03.432 "get_zone_info": false, 00:15:03.432 "zone_management": false, 00:15:03.432 "zone_append": false, 00:15:03.432 "compare": false, 00:15:03.432 "compare_and_write": false, 00:15:03.432 "abort": true, 00:15:03.432 "seek_hole": false, 00:15:03.432 "seek_data": false, 00:15:03.432 "copy": true, 00:15:03.432 "nvme_iov_md": false 00:15:03.432 }, 00:15:03.432 "memory_domains": [ 00:15:03.432 { 00:15:03.432 "dma_device_id": "system", 00:15:03.432 "dma_device_type": 1 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.432 "dma_device_type": 2 00:15:03.432 } 00:15:03.432 ], 00:15:03.432 "driver_specific": {} 00:15:03.432 } 00:15:03.432 ] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.432 02:48:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.432 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.432 "name": "Existed_Raid", 00:15:03.432 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:03.432 "strip_size_kb": 64, 00:15:03.432 "state": "configuring", 00:15:03.432 "raid_level": "raid5f", 00:15:03.432 "superblock": true, 00:15:03.432 "num_base_bdevs": 4, 00:15:03.432 "num_base_bdevs_discovered": 3, 00:15:03.432 "num_base_bdevs_operational": 4, 00:15:03.432 "base_bdevs_list": [ 00:15:03.432 { 00:15:03.432 "name": "BaseBdev1", 00:15:03.432 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:03.432 "is_configured": true, 00:15:03.432 "data_offset": 2048, 00:15:03.432 "data_size": 63488 00:15:03.432 
}, 00:15:03.432 { 00:15:03.432 "name": null, 00:15:03.432 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:03.432 "is_configured": false, 00:15:03.432 "data_offset": 0, 00:15:03.432 "data_size": 63488 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "name": "BaseBdev3", 00:15:03.432 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:03.432 "is_configured": true, 00:15:03.432 "data_offset": 2048, 00:15:03.432 "data_size": 63488 00:15:03.432 }, 00:15:03.432 { 00:15:03.432 "name": "BaseBdev4", 00:15:03.433 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:03.433 "is_configured": true, 00:15:03.433 "data_offset": 2048, 00:15:03.433 "data_size": 63488 00:15:03.433 } 00:15:03.433 ] 00:15:03.433 }' 00:15:03.433 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.433 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.001 
[2024-12-07 02:48:14.933471] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:04.001 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.001 "name": "Existed_Raid", 00:15:04.001 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:04.001 "strip_size_kb": 64, 00:15:04.001 "state": "configuring", 00:15:04.001 "raid_level": "raid5f", 00:15:04.001 "superblock": true, 00:15:04.001 "num_base_bdevs": 4, 00:15:04.001 "num_base_bdevs_discovered": 2, 00:15:04.001 "num_base_bdevs_operational": 4, 00:15:04.002 "base_bdevs_list": [ 00:15:04.002 { 00:15:04.002 "name": "BaseBdev1", 00:15:04.002 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:04.002 "is_configured": true, 00:15:04.002 "data_offset": 2048, 00:15:04.002 "data_size": 63488 00:15:04.002 }, 00:15:04.002 { 00:15:04.002 "name": null, 00:15:04.002 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:04.002 "is_configured": false, 00:15:04.002 "data_offset": 0, 00:15:04.002 "data_size": 63488 00:15:04.002 }, 00:15:04.002 { 00:15:04.002 "name": null, 00:15:04.002 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:04.002 "is_configured": false, 00:15:04.002 "data_offset": 0, 00:15:04.002 "data_size": 63488 00:15:04.002 }, 00:15:04.002 { 00:15:04.002 "name": "BaseBdev4", 00:15:04.002 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:04.002 "is_configured": true, 00:15:04.002 "data_offset": 2048, 00:15:04.002 "data_size": 63488 00:15:04.002 } 00:15:04.002 ] 00:15:04.002 }' 00:15:04.002 02:48:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.002 02:48:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.567 [2024-12-07 02:48:15.436687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:04.567 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.568 02:48:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.568 "name": "Existed_Raid", 00:15:04.568 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:04.568 "strip_size_kb": 64, 00:15:04.568 "state": "configuring", 00:15:04.568 "raid_level": "raid5f", 00:15:04.568 "superblock": true, 00:15:04.568 "num_base_bdevs": 4, 00:15:04.568 "num_base_bdevs_discovered": 3, 00:15:04.568 "num_base_bdevs_operational": 4, 00:15:04.568 "base_bdevs_list": [ 00:15:04.568 { 00:15:04.568 "name": "BaseBdev1", 00:15:04.568 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:04.568 "is_configured": true, 00:15:04.568 "data_offset": 2048, 00:15:04.568 "data_size": 63488 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "name": null, 00:15:04.568 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:04.568 "is_configured": false, 00:15:04.568 "data_offset": 0, 00:15:04.568 "data_size": 63488 00:15:04.568 }, 00:15:04.568 { 00:15:04.568 "name": "BaseBdev3", 00:15:04.568 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:04.568 "is_configured": true, 00:15:04.568 "data_offset": 2048, 00:15:04.568 "data_size": 63488 00:15:04.568 }, 00:15:04.568 { 
00:15:04.568 "name": "BaseBdev4", 00:15:04.568 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:04.568 "is_configured": true, 00:15:04.568 "data_offset": 2048, 00:15:04.568 "data_size": 63488 00:15:04.568 } 00:15:04.568 ] 00:15:04.568 }' 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.568 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.826 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.826 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:04.826 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.826 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.826 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.085 [2024-12-07 02:48:15.935869] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.085 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.085 "name": "Existed_Raid", 00:15:05.085 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:05.086 "strip_size_kb": 64, 00:15:05.086 "state": "configuring", 00:15:05.086 "raid_level": "raid5f", 00:15:05.086 "superblock": true, 00:15:05.086 "num_base_bdevs": 4, 00:15:05.086 "num_base_bdevs_discovered": 2, 00:15:05.086 
"num_base_bdevs_operational": 4, 00:15:05.086 "base_bdevs_list": [ 00:15:05.086 { 00:15:05.086 "name": null, 00:15:05.086 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:05.086 "is_configured": false, 00:15:05.086 "data_offset": 0, 00:15:05.086 "data_size": 63488 00:15:05.086 }, 00:15:05.086 { 00:15:05.086 "name": null, 00:15:05.086 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:05.086 "is_configured": false, 00:15:05.086 "data_offset": 0, 00:15:05.086 "data_size": 63488 00:15:05.086 }, 00:15:05.086 { 00:15:05.086 "name": "BaseBdev3", 00:15:05.086 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:05.086 "is_configured": true, 00:15:05.086 "data_offset": 2048, 00:15:05.086 "data_size": 63488 00:15:05.086 }, 00:15:05.086 { 00:15:05.086 "name": "BaseBdev4", 00:15:05.086 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:05.086 "is_configured": true, 00:15:05.086 "data_offset": 2048, 00:15:05.086 "data_size": 63488 00:15:05.086 } 00:15:05.086 ] 00:15:05.086 }' 00:15:05.086 02:48:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.086 02:48:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.344 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:05.344 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.345 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.345 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.603 [2024-12-07 02:48:16.457673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.603 "name": "Existed_Raid", 00:15:05.603 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:05.603 "strip_size_kb": 64, 00:15:05.603 "state": "configuring", 00:15:05.603 "raid_level": "raid5f", 00:15:05.603 "superblock": true, 00:15:05.603 "num_base_bdevs": 4, 00:15:05.603 "num_base_bdevs_discovered": 3, 00:15:05.603 "num_base_bdevs_operational": 4, 00:15:05.603 "base_bdevs_list": [ 00:15:05.603 { 00:15:05.603 "name": null, 00:15:05.603 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:05.603 "is_configured": false, 00:15:05.603 "data_offset": 0, 00:15:05.603 "data_size": 63488 00:15:05.603 }, 00:15:05.603 { 00:15:05.603 "name": "BaseBdev2", 00:15:05.603 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:05.603 "is_configured": true, 00:15:05.603 "data_offset": 2048, 00:15:05.603 "data_size": 63488 00:15:05.603 }, 00:15:05.603 { 00:15:05.603 "name": "BaseBdev3", 00:15:05.603 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:05.603 "is_configured": true, 00:15:05.603 "data_offset": 2048, 00:15:05.603 "data_size": 63488 00:15:05.603 }, 00:15:05.603 { 00:15:05.603 "name": "BaseBdev4", 00:15:05.603 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:05.603 "is_configured": true, 00:15:05.603 "data_offset": 2048, 00:15:05.603 "data_size": 63488 00:15:05.603 } 00:15:05.603 ] 00:15:05.603 }' 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.603 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.861 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3fe7fe34-9c67-4279-b7e4-9fc0bc65530c 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.119 [2024-12-07 02:48:16.967389] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:06.119 [2024-12-07 02:48:16.967565] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:06.119 [2024-12-07 
02:48:16.967578] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:06.119 [2024-12-07 02:48:16.967826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:06.119 NewBaseBdev 00:15:06.119 [2024-12-07 02:48:16.968259] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:06.119 [2024-12-07 02:48:16.968284] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006d00 00:15:06.119 [2024-12-07 02:48:16.968381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.119 02:48:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.119 [ 00:15:06.119 { 00:15:06.119 "name": "NewBaseBdev", 00:15:06.119 "aliases": [ 00:15:06.119 "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c" 00:15:06.119 ], 00:15:06.119 "product_name": "Malloc disk", 00:15:06.119 "block_size": 512, 00:15:06.119 "num_blocks": 65536, 00:15:06.119 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:06.119 "assigned_rate_limits": { 00:15:06.119 "rw_ios_per_sec": 0, 00:15:06.119 "rw_mbytes_per_sec": 0, 00:15:06.119 "r_mbytes_per_sec": 0, 00:15:06.119 "w_mbytes_per_sec": 0 00:15:06.119 }, 00:15:06.119 "claimed": true, 00:15:06.119 "claim_type": "exclusive_write", 00:15:06.119 "zoned": false, 00:15:06.119 "supported_io_types": { 00:15:06.119 "read": true, 00:15:06.119 "write": true, 00:15:06.119 "unmap": true, 00:15:06.119 "flush": true, 00:15:06.119 "reset": true, 00:15:06.119 "nvme_admin": false, 00:15:06.119 "nvme_io": false, 00:15:06.119 "nvme_io_md": false, 00:15:06.119 "write_zeroes": true, 00:15:06.119 "zcopy": true, 00:15:06.119 "get_zone_info": false, 00:15:06.119 "zone_management": false, 00:15:06.119 "zone_append": false, 00:15:06.119 "compare": false, 00:15:06.119 "compare_and_write": false, 00:15:06.119 "abort": true, 00:15:06.119 "seek_hole": false, 00:15:06.119 "seek_data": false, 00:15:06.119 "copy": true, 00:15:06.119 "nvme_iov_md": false 00:15:06.119 }, 00:15:06.119 "memory_domains": [ 00:15:06.119 { 00:15:06.119 "dma_device_id": "system", 00:15:06.119 "dma_device_type": 1 00:15:06.119 }, 00:15:06.119 { 00:15:06.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.119 "dma_device_type": 2 00:15:06.119 } 00:15:06.119 ], 00:15:06.119 "driver_specific": {} 00:15:06.119 } 00:15:06.119 ] 00:15:06.119 02:48:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.119 "name": "Existed_Raid", 00:15:06.119 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:06.119 "strip_size_kb": 64, 00:15:06.119 "state": "online", 00:15:06.119 "raid_level": "raid5f", 00:15:06.119 "superblock": true, 00:15:06.119 "num_base_bdevs": 4, 00:15:06.119 "num_base_bdevs_discovered": 4, 00:15:06.119 "num_base_bdevs_operational": 4, 00:15:06.119 "base_bdevs_list": [ 00:15:06.119 { 00:15:06.119 "name": "NewBaseBdev", 00:15:06.119 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:06.119 "is_configured": true, 00:15:06.119 "data_offset": 2048, 00:15:06.119 "data_size": 63488 00:15:06.119 }, 00:15:06.119 { 00:15:06.119 "name": "BaseBdev2", 00:15:06.119 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:06.119 "is_configured": true, 00:15:06.119 "data_offset": 2048, 00:15:06.119 "data_size": 63488 00:15:06.119 }, 00:15:06.119 { 00:15:06.119 "name": "BaseBdev3", 00:15:06.119 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:06.119 "is_configured": true, 00:15:06.119 "data_offset": 2048, 00:15:06.119 "data_size": 63488 00:15:06.119 }, 00:15:06.119 { 00:15:06.119 "name": "BaseBdev4", 00:15:06.119 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:06.119 "is_configured": true, 00:15:06.119 "data_offset": 2048, 00:15:06.119 "data_size": 63488 00:15:06.119 } 00:15:06.119 ] 00:15:06.119 }' 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.119 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.377 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.377 [2024-12-07 02:48:17.446777] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.636 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.636 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:06.636 "name": "Existed_Raid", 00:15:06.636 "aliases": [ 00:15:06.637 "4e13deda-a8ae-430a-86ae-a8431117c953" 00:15:06.637 ], 00:15:06.637 "product_name": "Raid Volume", 00:15:06.637 "block_size": 512, 00:15:06.637 "num_blocks": 190464, 00:15:06.637 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:06.637 "assigned_rate_limits": { 00:15:06.637 "rw_ios_per_sec": 0, 00:15:06.637 "rw_mbytes_per_sec": 0, 00:15:06.637 "r_mbytes_per_sec": 0, 00:15:06.637 "w_mbytes_per_sec": 0 00:15:06.637 }, 00:15:06.637 "claimed": false, 00:15:06.637 "zoned": false, 00:15:06.637 "supported_io_types": { 00:15:06.637 "read": true, 00:15:06.637 "write": true, 00:15:06.637 "unmap": false, 00:15:06.637 "flush": false, 00:15:06.637 "reset": true, 00:15:06.637 "nvme_admin": false, 00:15:06.637 "nvme_io": false, 
00:15:06.637 "nvme_io_md": false, 00:15:06.637 "write_zeroes": true, 00:15:06.637 "zcopy": false, 00:15:06.637 "get_zone_info": false, 00:15:06.637 "zone_management": false, 00:15:06.637 "zone_append": false, 00:15:06.637 "compare": false, 00:15:06.637 "compare_and_write": false, 00:15:06.637 "abort": false, 00:15:06.637 "seek_hole": false, 00:15:06.637 "seek_data": false, 00:15:06.637 "copy": false, 00:15:06.637 "nvme_iov_md": false 00:15:06.637 }, 00:15:06.637 "driver_specific": { 00:15:06.637 "raid": { 00:15:06.637 "uuid": "4e13deda-a8ae-430a-86ae-a8431117c953", 00:15:06.637 "strip_size_kb": 64, 00:15:06.637 "state": "online", 00:15:06.637 "raid_level": "raid5f", 00:15:06.637 "superblock": true, 00:15:06.637 "num_base_bdevs": 4, 00:15:06.637 "num_base_bdevs_discovered": 4, 00:15:06.637 "num_base_bdevs_operational": 4, 00:15:06.637 "base_bdevs_list": [ 00:15:06.637 { 00:15:06.637 "name": "NewBaseBdev", 00:15:06.637 "uuid": "3fe7fe34-9c67-4279-b7e4-9fc0bc65530c", 00:15:06.637 "is_configured": true, 00:15:06.637 "data_offset": 2048, 00:15:06.637 "data_size": 63488 00:15:06.637 }, 00:15:06.637 { 00:15:06.637 "name": "BaseBdev2", 00:15:06.637 "uuid": "17321eda-c608-412a-91a8-70a54e21f8e7", 00:15:06.637 "is_configured": true, 00:15:06.637 "data_offset": 2048, 00:15:06.637 "data_size": 63488 00:15:06.637 }, 00:15:06.637 { 00:15:06.637 "name": "BaseBdev3", 00:15:06.637 "uuid": "a8f5a2f4-e0bb-415f-a571-df2ceadef2fd", 00:15:06.637 "is_configured": true, 00:15:06.637 "data_offset": 2048, 00:15:06.637 "data_size": 63488 00:15:06.637 }, 00:15:06.637 { 00:15:06.637 "name": "BaseBdev4", 00:15:06.637 "uuid": "93463a25-43c5-4f0c-bb09-1af78a4c667e", 00:15:06.637 "is_configured": true, 00:15:06.637 "data_offset": 2048, 00:15:06.637 "data_size": 63488 00:15:06.637 } 00:15:06.637 ] 00:15:06.637 } 00:15:06.637 } 00:15:06.637 }' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:06.637 BaseBdev2 00:15:06.637 BaseBdev3 00:15:06.637 BaseBdev4' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.637 02:48:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.637 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.895 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.896 02:48:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.896 [2024-12-07 02:48:17.770022] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.896 [2024-12-07 02:48:17.770047] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.896 [2024-12-07 02:48:17.770115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.896 [2024-12-07 02:48:17.770358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.896 [2024-12-07 02:48:17.770374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name Existed_Raid, state offline 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 94164 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 94164 ']' 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 94164 00:15:06.896 02:48:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94164 00:15:06.896 killing process with pid 94164 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94164' 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 94164 00:15:06.896 [2024-12-07 02:48:17.820916] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.896 02:48:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 94164 00:15:06.896 [2024-12-07 02:48:17.861543] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.155 02:48:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:07.155 ************************************ 00:15:07.155 END TEST raid5f_state_function_test_sb 00:15:07.155 ************************************ 00:15:07.155 00:15:07.155 real 0m9.611s 00:15:07.155 user 0m16.342s 00:15:07.155 sys 0m2.123s 00:15:07.155 02:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.155 02:48:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.155 02:48:18 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:07.155 02:48:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:07.155 
02:48:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.155 02:48:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:07.155 ************************************ 00:15:07.155 START TEST raid5f_superblock_test 00:15:07.155 ************************************ 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94814 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94814 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94814 ']' 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.155 02:48:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.414 [2024-12-07 02:48:18.283630] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:07.414 [2024-12-07 02:48:18.283754] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94814 ] 00:15:07.414 [2024-12-07 02:48:18.443132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.414 [2024-12-07 02:48:18.488383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.673 [2024-12-07 02:48:18.531035] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:07.673 [2024-12-07 02:48:18.531067] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.241 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 malloc1 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 [2024-12-07 02:48:19.109574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:08.242 [2024-12-07 02:48:19.109701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.242 [2024-12-07 02:48:19.109742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.242 [2024-12-07 02:48:19.109778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.242 [2024-12-07 02:48:19.111775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.242 [2024-12-07 02:48:19.111848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:08.242 pt1 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 malloc2 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 [2024-12-07 02:48:19.156714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:08.242 [2024-12-07 02:48:19.156897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.242 [2024-12-07 02:48:19.156972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:08.242 [2024-12-07 02:48:19.157050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.242 [2024-12-07 02:48:19.161549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.242 [2024-12-07 02:48:19.161641] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:08.242 pt2 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 malloc3 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 [2024-12-07 02:48:19.187534] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:08.242 [2024-12-07 02:48:19.187630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.242 [2024-12-07 02:48:19.187665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:08.242 [2024-12-07 02:48:19.187690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.242 [2024-12-07 02:48:19.189689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.242 [2024-12-07 02:48:19.189760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:08.242 pt3 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 malloc4 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 [2024-12-07 02:48:19.220009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:08.242 [2024-12-07 02:48:19.220092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.242 [2024-12-07 02:48:19.220122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:08.242 [2024-12-07 02:48:19.220151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.242 [2024-12-07 02:48:19.222108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.242 [2024-12-07 02:48:19.222179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:08.242 pt4 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.242 [2024-12-07 02:48:19.232059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:08.242 [2024-12-07 02:48:19.233855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:08.242 [2024-12-07 02:48:19.233945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:08.242 [2024-12-07 02:48:19.234019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:08.242 [2024-12-07 02:48:19.234229] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:08.242 [2024-12-07 02:48:19.234275] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:08.242 [2024-12-07 02:48:19.234513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:08.242 [2024-12-07 02:48:19.234971] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:08.242 [2024-12-07 02:48:19.234983] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:08.242 [2024-12-07 02:48:19.235101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.242 
02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.242 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.242 "name": "raid_bdev1", 00:15:08.242 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:08.242 "strip_size_kb": 64, 00:15:08.242 "state": "online", 00:15:08.242 "raid_level": "raid5f", 00:15:08.242 "superblock": true, 00:15:08.242 "num_base_bdevs": 4, 00:15:08.242 "num_base_bdevs_discovered": 4, 00:15:08.242 "num_base_bdevs_operational": 4, 00:15:08.242 "base_bdevs_list": [ 00:15:08.242 { 00:15:08.243 "name": "pt1", 00:15:08.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.243 "is_configured": true, 00:15:08.243 "data_offset": 2048, 00:15:08.243 "data_size": 63488 00:15:08.243 }, 00:15:08.243 { 00:15:08.243 "name": "pt2", 00:15:08.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.243 "is_configured": true, 00:15:08.243 "data_offset": 2048, 00:15:08.243 
"data_size": 63488 00:15:08.243 }, 00:15:08.243 { 00:15:08.243 "name": "pt3", 00:15:08.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.243 "is_configured": true, 00:15:08.243 "data_offset": 2048, 00:15:08.243 "data_size": 63488 00:15:08.243 }, 00:15:08.243 { 00:15:08.243 "name": "pt4", 00:15:08.243 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.243 "is_configured": true, 00:15:08.243 "data_offset": 2048, 00:15:08.243 "data_size": 63488 00:15:08.243 } 00:15:08.243 ] 00:15:08.243 }' 00:15:08.243 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.243 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.811 [2024-12-07 02:48:19.740232] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.811 "name": "raid_bdev1", 00:15:08.811 "aliases": [ 00:15:08.811 "b900a5c7-e69a-4149-9119-9c12f20a5464" 00:15:08.811 ], 00:15:08.811 "product_name": "Raid Volume", 00:15:08.811 "block_size": 512, 00:15:08.811 "num_blocks": 190464, 00:15:08.811 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:08.811 "assigned_rate_limits": { 00:15:08.811 "rw_ios_per_sec": 0, 00:15:08.811 "rw_mbytes_per_sec": 0, 00:15:08.811 "r_mbytes_per_sec": 0, 00:15:08.811 "w_mbytes_per_sec": 0 00:15:08.811 }, 00:15:08.811 "claimed": false, 00:15:08.811 "zoned": false, 00:15:08.811 "supported_io_types": { 00:15:08.811 "read": true, 00:15:08.811 "write": true, 00:15:08.811 "unmap": false, 00:15:08.811 "flush": false, 00:15:08.811 "reset": true, 00:15:08.811 "nvme_admin": false, 00:15:08.811 "nvme_io": false, 00:15:08.811 "nvme_io_md": false, 00:15:08.811 "write_zeroes": true, 00:15:08.811 "zcopy": false, 00:15:08.811 "get_zone_info": false, 00:15:08.811 "zone_management": false, 00:15:08.811 "zone_append": false, 00:15:08.811 "compare": false, 00:15:08.811 "compare_and_write": false, 00:15:08.811 "abort": false, 00:15:08.811 "seek_hole": false, 00:15:08.811 "seek_data": false, 00:15:08.811 "copy": false, 00:15:08.811 "nvme_iov_md": false 00:15:08.811 }, 00:15:08.811 "driver_specific": { 00:15:08.811 "raid": { 00:15:08.811 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:08.811 "strip_size_kb": 64, 00:15:08.811 "state": "online", 00:15:08.811 "raid_level": "raid5f", 00:15:08.811 "superblock": true, 00:15:08.811 "num_base_bdevs": 4, 00:15:08.811 "num_base_bdevs_discovered": 4, 00:15:08.811 "num_base_bdevs_operational": 4, 00:15:08.811 "base_bdevs_list": [ 00:15:08.811 { 00:15:08.811 "name": "pt1", 00:15:08.811 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.811 "is_configured": true, 00:15:08.811 "data_offset": 2048, 
00:15:08.811 "data_size": 63488 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "name": "pt2", 00:15:08.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.811 "is_configured": true, 00:15:08.811 "data_offset": 2048, 00:15:08.811 "data_size": 63488 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "name": "pt3", 00:15:08.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.811 "is_configured": true, 00:15:08.811 "data_offset": 2048, 00:15:08.811 "data_size": 63488 00:15:08.811 }, 00:15:08.811 { 00:15:08.811 "name": "pt4", 00:15:08.811 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:08.811 "is_configured": true, 00:15:08.811 "data_offset": 2048, 00:15:08.811 "data_size": 63488 00:15:08.811 } 00:15:08.811 ] 00:15:08.811 } 00:15:08.811 } 00:15:08.811 }' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:08.811 pt2 00:15:08.811 pt3 00:15:08.811 pt4' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.811 02:48:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.811 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.071 02:48:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:09.071 [2024-12-07 02:48:20.019820] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b900a5c7-e69a-4149-9119-9c12f20a5464 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
b900a5c7-e69a-4149-9119-9c12f20a5464 ']' 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.071 [2024-12-07 02:48:20.067535] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.071 [2024-12-07 02:48:20.067561] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.071 [2024-12-07 02:48:20.067657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.071 [2024-12-07 02:48:20.067733] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.071 [2024-12-07 02:48:20.067748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.071 
02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.071 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.332 02:48:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:09.332 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.332 [2024-12-07 02:48:20.223301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:09.332 [2024-12-07 02:48:20.225087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:09.332 [2024-12-07 02:48:20.225127] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:09.332 [2024-12-07 02:48:20.225153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:09.333 [2024-12-07 02:48:20.225193] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:09.333 [2024-12-07 02:48:20.225231] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:09.333 [2024-12-07 02:48:20.225248] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:09.333 [2024-12-07 02:48:20.225264] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:09.333 [2024-12-07 02:48:20.225277] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.333 [2024-12-07 02:48:20.225286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:15:09.333 request: 00:15:09.333 { 00:15:09.333 "name": "raid_bdev1", 00:15:09.333 "raid_level": "raid5f", 00:15:09.333 "base_bdevs": [ 00:15:09.333 "malloc1", 00:15:09.333 "malloc2", 00:15:09.333 "malloc3", 00:15:09.333 "malloc4" 00:15:09.333 ], 00:15:09.333 "strip_size_kb": 64, 00:15:09.333 "superblock": false, 00:15:09.333 "method": "bdev_raid_create", 00:15:09.333 "req_id": 1 00:15:09.333 } 00:15:09.333 Got JSON-RPC error response 
00:15:09.333 response: 00:15:09.333 { 00:15:09.333 "code": -17, 00:15:09.333 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:09.333 } 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 [2024-12-07 02:48:20.291137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:09.333 [2024-12-07 02:48:20.291181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:09.333 [2024-12-07 02:48:20.291201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:09.333 [2024-12-07 02:48:20.291208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.333 [2024-12-07 02:48:20.293247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.333 [2024-12-07 02:48:20.293327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:09.333 [2024-12-07 02:48:20.293392] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:09.333 [2024-12-07 02:48:20.293431] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:09.333 pt1 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.333 "name": "raid_bdev1", 00:15:09.333 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:09.333 "strip_size_kb": 64, 00:15:09.333 "state": "configuring", 00:15:09.333 "raid_level": "raid5f", 00:15:09.333 "superblock": true, 00:15:09.333 "num_base_bdevs": 4, 00:15:09.333 "num_base_bdevs_discovered": 1, 00:15:09.333 "num_base_bdevs_operational": 4, 00:15:09.333 "base_bdevs_list": [ 00:15:09.333 { 00:15:09.333 "name": "pt1", 00:15:09.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.333 "is_configured": true, 00:15:09.333 "data_offset": 2048, 00:15:09.333 "data_size": 63488 00:15:09.333 }, 00:15:09.333 { 00:15:09.333 "name": null, 00:15:09.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.333 "is_configured": false, 00:15:09.333 "data_offset": 2048, 00:15:09.333 "data_size": 63488 00:15:09.333 }, 00:15:09.333 { 00:15:09.333 "name": null, 00:15:09.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.333 "is_configured": false, 00:15:09.333 "data_offset": 2048, 00:15:09.333 "data_size": 63488 00:15:09.333 }, 00:15:09.333 { 00:15:09.333 "name": null, 00:15:09.333 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.333 "is_configured": false, 00:15:09.333 "data_offset": 2048, 00:15:09.333 "data_size": 63488 00:15:09.333 } 00:15:09.333 ] 00:15:09.333 }' 
00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.333 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.903 [2024-12-07 02:48:20.730372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.903 [2024-12-07 02:48:20.730456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.903 [2024-12-07 02:48:20.730505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:09.903 [2024-12-07 02:48:20.730536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.903 [2024-12-07 02:48:20.730877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.903 [2024-12-07 02:48:20.730930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.903 [2024-12-07 02:48:20.731010] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.903 [2024-12-07 02:48:20.731055] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.903 pt2 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.903 [2024-12-07 02:48:20.742370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.903 "name": "raid_bdev1", 00:15:09.903 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:09.903 "strip_size_kb": 64, 00:15:09.903 "state": "configuring", 00:15:09.903 "raid_level": "raid5f", 00:15:09.903 "superblock": true, 00:15:09.903 "num_base_bdevs": 4, 00:15:09.903 "num_base_bdevs_discovered": 1, 00:15:09.903 "num_base_bdevs_operational": 4, 00:15:09.903 "base_bdevs_list": [ 00:15:09.903 { 00:15:09.903 "name": "pt1", 00:15:09.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.903 "is_configured": true, 00:15:09.903 "data_offset": 2048, 00:15:09.903 "data_size": 63488 00:15:09.903 }, 00:15:09.903 { 00:15:09.903 "name": null, 00:15:09.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.903 "is_configured": false, 00:15:09.903 "data_offset": 0, 00:15:09.903 "data_size": 63488 00:15:09.903 }, 00:15:09.903 { 00:15:09.903 "name": null, 00:15:09.903 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.903 "is_configured": false, 00:15:09.903 "data_offset": 2048, 00:15:09.903 "data_size": 63488 00:15:09.903 }, 00:15:09.903 { 00:15:09.903 "name": null, 00:15:09.903 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.903 "is_configured": false, 00:15:09.903 "data_offset": 2048, 00:15:09.903 "data_size": 63488 00:15:09.903 } 00:15:09.903 ] 00:15:09.903 }' 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.903 02:48:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.162 [2024-12-07 02:48:21.185619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:10.162 [2024-12-07 02:48:21.185663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.162 [2024-12-07 02:48:21.185677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:10.162 [2024-12-07 02:48:21.185686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.162 [2024-12-07 02:48:21.185994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.162 [2024-12-07 02:48:21.186013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:10.162 [2024-12-07 02:48:21.186064] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:10.162 [2024-12-07 02:48:21.186083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.162 pt2 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.162 [2024-12-07 02:48:21.197566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:10.162 [2024-12-07 02:48:21.197625] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.162 [2024-12-07 02:48:21.197640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:10.162 [2024-12-07 02:48:21.197650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.162 [2024-12-07 02:48:21.197932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.162 [2024-12-07 02:48:21.197951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.162 [2024-12-07 02:48:21.197998] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.162 [2024-12-07 02:48:21.198015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.162 pt3 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.162 [2024-12-07 02:48:21.209551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:10.162 [2024-12-07 02:48:21.209628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.162 [2024-12-07 02:48:21.209643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:10.162 [2024-12-07 02:48:21.209652] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.162 [2024-12-07 02:48:21.209928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.162 [2024-12-07 02:48:21.209946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:10.162 [2024-12-07 02:48:21.209990] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:10.162 [2024-12-07 02:48:21.210013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:10.162 [2024-12-07 02:48:21.210106] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:10.162 [2024-12-07 02:48:21.210122] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:10.162 [2024-12-07 02:48:21.210332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:10.162 [2024-12-07 02:48:21.210804] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:10.162 [2024-12-07 02:48:21.210857] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:15:10.162 [2024-12-07 02:48:21.210953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.162 pt4 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.162 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.163 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.163 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.163 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.163 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.163 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.163 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.421 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.421 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.421 "name": "raid_bdev1", 00:15:10.421 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:10.421 "strip_size_kb": 64, 00:15:10.421 "state": "online", 00:15:10.421 "raid_level": "raid5f", 00:15:10.421 "superblock": true, 00:15:10.421 "num_base_bdevs": 4, 00:15:10.421 "num_base_bdevs_discovered": 4, 00:15:10.421 "num_base_bdevs_operational": 4, 00:15:10.421 "base_bdevs_list": [ 00:15:10.421 { 00:15:10.421 "name": "pt1", 00:15:10.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.421 "is_configured": true, 00:15:10.421 
"data_offset": 2048, 00:15:10.421 "data_size": 63488 00:15:10.421 }, 00:15:10.421 { 00:15:10.421 "name": "pt2", 00:15:10.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.421 "is_configured": true, 00:15:10.421 "data_offset": 2048, 00:15:10.421 "data_size": 63488 00:15:10.421 }, 00:15:10.421 { 00:15:10.421 "name": "pt3", 00:15:10.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.421 "is_configured": true, 00:15:10.421 "data_offset": 2048, 00:15:10.421 "data_size": 63488 00:15:10.421 }, 00:15:10.421 { 00:15:10.421 "name": "pt4", 00:15:10.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.421 "is_configured": true, 00:15:10.421 "data_offset": 2048, 00:15:10.421 "data_size": 63488 00:15:10.421 } 00:15:10.421 ] 00:15:10.421 }' 00:15:10.421 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.421 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.679 02:48:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.679 [2024-12-07 02:48:21.660947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:10.679 "name": "raid_bdev1", 00:15:10.679 "aliases": [ 00:15:10.679 "b900a5c7-e69a-4149-9119-9c12f20a5464" 00:15:10.679 ], 00:15:10.679 "product_name": "Raid Volume", 00:15:10.679 "block_size": 512, 00:15:10.679 "num_blocks": 190464, 00:15:10.679 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:10.679 "assigned_rate_limits": { 00:15:10.679 "rw_ios_per_sec": 0, 00:15:10.679 "rw_mbytes_per_sec": 0, 00:15:10.679 "r_mbytes_per_sec": 0, 00:15:10.679 "w_mbytes_per_sec": 0 00:15:10.679 }, 00:15:10.679 "claimed": false, 00:15:10.679 "zoned": false, 00:15:10.679 "supported_io_types": { 00:15:10.679 "read": true, 00:15:10.679 "write": true, 00:15:10.679 "unmap": false, 00:15:10.679 "flush": false, 00:15:10.679 "reset": true, 00:15:10.679 "nvme_admin": false, 00:15:10.679 "nvme_io": false, 00:15:10.679 "nvme_io_md": false, 00:15:10.679 "write_zeroes": true, 00:15:10.679 "zcopy": false, 00:15:10.679 "get_zone_info": false, 00:15:10.679 "zone_management": false, 00:15:10.679 "zone_append": false, 00:15:10.679 "compare": false, 00:15:10.679 "compare_and_write": false, 00:15:10.679 "abort": false, 00:15:10.679 "seek_hole": false, 00:15:10.679 "seek_data": false, 00:15:10.679 "copy": false, 00:15:10.679 "nvme_iov_md": false 00:15:10.679 }, 00:15:10.679 "driver_specific": { 00:15:10.679 "raid": { 00:15:10.679 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:10.679 "strip_size_kb": 64, 00:15:10.679 "state": "online", 00:15:10.679 "raid_level": "raid5f", 00:15:10.679 "superblock": true, 00:15:10.679 "num_base_bdevs": 4, 00:15:10.679 "num_base_bdevs_discovered": 4, 
00:15:10.679 "num_base_bdevs_operational": 4, 00:15:10.679 "base_bdevs_list": [ 00:15:10.679 { 00:15:10.679 "name": "pt1", 00:15:10.679 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.679 "is_configured": true, 00:15:10.679 "data_offset": 2048, 00:15:10.679 "data_size": 63488 00:15:10.679 }, 00:15:10.679 { 00:15:10.679 "name": "pt2", 00:15:10.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.679 "is_configured": true, 00:15:10.679 "data_offset": 2048, 00:15:10.679 "data_size": 63488 00:15:10.679 }, 00:15:10.679 { 00:15:10.679 "name": "pt3", 00:15:10.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.679 "is_configured": true, 00:15:10.679 "data_offset": 2048, 00:15:10.679 "data_size": 63488 00:15:10.679 }, 00:15:10.679 { 00:15:10.679 "name": "pt4", 00:15:10.679 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.679 "is_configured": true, 00:15:10.679 "data_offset": 2048, 00:15:10.679 "data_size": 63488 00:15:10.679 } 00:15:10.679 ] 00:15:10.679 } 00:15:10.679 } 00:15:10.679 }' 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:10.679 pt2 00:15:10.679 pt3 00:15:10.679 pt4' 00:15:10.679 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.937 02:48:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:10.937 [2024-12-07 02:48:21.980391] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.937 02:48:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.196 
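The trace above runs bdev_raid.sh lines 189-193 once per base bdev: it extracts `[.block_size, .md_size, .md_interleave, .dif_type]` from `bdev_get_bdevs`, joins them with spaces, and compares the result against the raid bdev's own string (`'512   '` — 512 followed by three spaces for the null md fields). The following is a minimal runnable sketch of that loop; `rpc_cmd` is stubbed here so it runs without a live SPDK target, whereas the real suite drives `scripts/rpc.py` against the running app.

```shell
#!/usr/bin/env bash
# Sketch of the per-base-bdev metadata comparison loop (bdev_raid.sh 189-193).
# rpc_cmd is a stub: every bdev reports block_size=512 and null md fields,
# which jq's join(" ") renders as "512   " (three trailing spaces).
rpc_cmd() {
  echo '512   '
}

cmp_raid_bdev='512   '   # [.block_size, .md_size, .md_interleave, .dif_type] joined
for name in pt1 pt2 pt3 pt4; do
  cmp_base_bdev=$(rpc_cmd bdev_get_bdevs -b "$name")
  # The suite writes this as [[ 512 == \5\1\2\ \ \  ]]; the backslash
  # escaping disables glob interpretation, making it a literal match.
  [[ $cmp_base_bdev == "$cmp_raid_bdev" ]] || { echo "mismatch on $name"; exit 1; }
done
echo "all base bdevs match"
```

The escaped-pattern form seen in the trace and the quoted right-hand side above are equivalent: both force `[[ == ]]` to compare literal strings rather than treat the right side as a glob.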
02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b900a5c7-e69a-4149-9119-9c12f20a5464 '!=' b900a5c7-e69a-4149-9119-9c12f20a5464 ']' 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.196 [2024-12-07 02:48:22.028177] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.196 "name": "raid_bdev1", 00:15:11.196 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:11.196 "strip_size_kb": 64, 00:15:11.196 "state": "online", 00:15:11.196 "raid_level": "raid5f", 00:15:11.196 "superblock": true, 00:15:11.196 "num_base_bdevs": 4, 00:15:11.196 "num_base_bdevs_discovered": 3, 00:15:11.196 "num_base_bdevs_operational": 3, 00:15:11.196 "base_bdevs_list": [ 00:15:11.196 { 00:15:11.196 "name": null, 00:15:11.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.196 "is_configured": false, 00:15:11.196 "data_offset": 0, 00:15:11.196 "data_size": 63488 00:15:11.196 }, 00:15:11.196 { 00:15:11.196 "name": "pt2", 00:15:11.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.196 "is_configured": true, 00:15:11.196 "data_offset": 2048, 00:15:11.196 "data_size": 63488 00:15:11.196 }, 00:15:11.196 { 00:15:11.196 "name": "pt3", 00:15:11.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.196 "is_configured": true, 00:15:11.196 "data_offset": 2048, 00:15:11.196 "data_size": 63488 00:15:11.196 }, 00:15:11.196 { 00:15:11.196 "name": "pt4", 00:15:11.196 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.196 "is_configured": true, 00:15:11.196 
"data_offset": 2048, 00:15:11.196 "data_size": 63488 00:15:11.196 } 00:15:11.196 ] 00:15:11.196 }' 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.196 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.455 [2024-12-07 02:48:22.455467] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.455 [2024-12-07 02:48:22.455530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.455 [2024-12-07 02:48:22.455618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.455 [2024-12-07 02:48:22.455720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.455 [2024-12-07 02:48:22.455766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.455 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.713 [2024-12-07 02:48:22.551308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.713 [2024-12-07 02:48:22.551396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.713 [2024-12-07 02:48:22.551417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:11.713 [2024-12-07 02:48:22.551428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.713 [2024-12-07 02:48:22.553387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.713 [2024-12-07 02:48:22.553426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.713 [2024-12-07 02:48:22.553482] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:11.713 [2024-12-07 02:48:22.553512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.713 pt2 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.713 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.714 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.714 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.714 "name": "raid_bdev1", 00:15:11.714 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:11.714 "strip_size_kb": 64, 00:15:11.714 "state": "configuring", 00:15:11.714 "raid_level": "raid5f", 00:15:11.714 "superblock": true, 00:15:11.714 
"num_base_bdevs": 4, 00:15:11.714 "num_base_bdevs_discovered": 1, 00:15:11.714 "num_base_bdevs_operational": 3, 00:15:11.714 "base_bdevs_list": [ 00:15:11.714 { 00:15:11.714 "name": null, 00:15:11.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.714 "is_configured": false, 00:15:11.714 "data_offset": 2048, 00:15:11.714 "data_size": 63488 00:15:11.714 }, 00:15:11.714 { 00:15:11.714 "name": "pt2", 00:15:11.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.714 "is_configured": true, 00:15:11.714 "data_offset": 2048, 00:15:11.714 "data_size": 63488 00:15:11.714 }, 00:15:11.714 { 00:15:11.714 "name": null, 00:15:11.714 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.714 "is_configured": false, 00:15:11.714 "data_offset": 2048, 00:15:11.714 "data_size": 63488 00:15:11.714 }, 00:15:11.714 { 00:15:11.714 "name": null, 00:15:11.714 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.714 "is_configured": false, 00:15:11.714 "data_offset": 2048, 00:15:11.714 "data_size": 63488 00:15:11.714 } 00:15:11.714 ] 00:15:11.714 }' 00:15:11.714 02:48:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.714 02:48:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.972 [2024-12-07 02:48:23.026519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:11.972 [2024-12-07 
02:48:23.026611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.972 [2024-12-07 02:48:23.026644] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:11.972 [2024-12-07 02:48:23.026675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.972 [2024-12-07 02:48:23.027012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.972 [2024-12-07 02:48:23.027076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:11.972 [2024-12-07 02:48:23.027160] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:11.972 [2024-12-07 02:48:23.027218] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:11.972 pt3 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.972 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.231 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.231 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.231 "name": "raid_bdev1", 00:15:12.231 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:12.231 "strip_size_kb": 64, 00:15:12.231 "state": "configuring", 00:15:12.231 "raid_level": "raid5f", 00:15:12.231 "superblock": true, 00:15:12.231 "num_base_bdevs": 4, 00:15:12.231 "num_base_bdevs_discovered": 2, 00:15:12.231 "num_base_bdevs_operational": 3, 00:15:12.231 "base_bdevs_list": [ 00:15:12.231 { 00:15:12.231 "name": null, 00:15:12.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.231 "is_configured": false, 00:15:12.231 "data_offset": 2048, 00:15:12.231 "data_size": 63488 00:15:12.231 }, 00:15:12.231 { 00:15:12.231 "name": "pt2", 00:15:12.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.231 "is_configured": true, 00:15:12.231 "data_offset": 2048, 00:15:12.231 "data_size": 63488 00:15:12.231 }, 00:15:12.231 { 00:15:12.231 "name": "pt3", 00:15:12.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.231 "is_configured": true, 00:15:12.231 "data_offset": 2048, 00:15:12.231 "data_size": 63488 00:15:12.231 }, 00:15:12.231 { 00:15:12.231 "name": null, 00:15:12.231 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.231 "is_configured": false, 00:15:12.231 "data_offset": 2048, 
00:15:12.231 "data_size": 63488 00:15:12.231 } 00:15:12.231 ] 00:15:12.231 }' 00:15:12.231 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.231 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.490 [2024-12-07 02:48:23.429793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:12.490 [2024-12-07 02:48:23.429885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.490 [2024-12-07 02:48:23.429922] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:12.490 [2024-12-07 02:48:23.429934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.490 [2024-12-07 02:48:23.430241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.490 [2024-12-07 02:48:23.430261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:12.490 [2024-12-07 02:48:23.430317] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:12.490 [2024-12-07 02:48:23.430336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:12.490 [2024-12-07 02:48:23.430419] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:15:12.490 [2024-12-07 02:48:23.430430] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:12.490 [2024-12-07 02:48:23.430653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:12.490 [2024-12-07 02:48:23.431152] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:15:12.490 [2024-12-07 02:48:23.431164] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:15:12.490 [2024-12-07 02:48:23.431359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.490 pt4 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.490 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.490 
02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.491 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.491 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.491 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.491 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.491 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.491 "name": "raid_bdev1", 00:15:12.491 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:12.491 "strip_size_kb": 64, 00:15:12.491 "state": "online", 00:15:12.491 "raid_level": "raid5f", 00:15:12.491 "superblock": true, 00:15:12.491 "num_base_bdevs": 4, 00:15:12.491 "num_base_bdevs_discovered": 3, 00:15:12.491 "num_base_bdevs_operational": 3, 00:15:12.491 "base_bdevs_list": [ 00:15:12.491 { 00:15:12.491 "name": null, 00:15:12.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.491 "is_configured": false, 00:15:12.491 "data_offset": 2048, 00:15:12.491 "data_size": 63488 00:15:12.491 }, 00:15:12.491 { 00:15:12.491 "name": "pt2", 00:15:12.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.491 "is_configured": true, 00:15:12.491 "data_offset": 2048, 00:15:12.491 "data_size": 63488 00:15:12.491 }, 00:15:12.491 { 00:15:12.491 "name": "pt3", 00:15:12.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.491 "is_configured": true, 00:15:12.491 "data_offset": 2048, 00:15:12.491 "data_size": 63488 00:15:12.491 }, 00:15:12.491 { 00:15:12.491 "name": "pt4", 00:15:12.491 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.491 "is_configured": true, 00:15:12.491 "data_offset": 2048, 00:15:12.491 "data_size": 63488 00:15:12.491 } 00:15:12.491 ] 00:15:12.491 }' 00:15:12.491 02:48:23 
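The preceding trace tears the raid bdev down and rebuilds only the middle base bdevs: `bdev_raid_delete raid_bdev1`, a check that `bdev_raid_get_bdevs all` now returns nothing, then the `(( i = 1 ; i < num_base_bdevs - 1 ))` loop re-creating pt2 and pt3 so the raid re-assembles with `num_base_bdevs_discovered` short of operational. A minimal sketch of that cycle, with `rpc_cmd` stubbed (the real suite calls `scripts/rpc.py`) and a shortened placeholder UUID where the suite passes full `00000000-...` UUIDs:

```shell
#!/usr/bin/env bash
# Sketch of the delete/recreate cycle (bdev_raid.sh ~499-512).
num_base_bdevs=4
rpc_cmd() { :; }   # stub: accept any RPC, produce no output

rpc_cmd bdev_raid_delete raid_bdev1
raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all)   # expected empty after delete
[ -z "$raid_bdev" ] || exit 1

# Recreate only the middle base bdevs (pt2, pt3 when num_base_bdevs=4);
# pt1 and pt4 stay absent, so the raid bdev comes back in a degraded or
# 'configuring' state rather than fully online.
for (( i = 1; i < num_base_bdevs - 1; i++ )); do
  rpc_cmd bdev_passthru_create -b "malloc$((i + 1))" -p "pt$((i + 1))" \
      -u "uuid-$((i + 1))"   # hypothetical short UUID for the sketch
done
echo "recreated pt2..pt$((num_base_bdevs - 1))"
```

The loop bound `i < num_base_bdevs - 1` is what excludes the last base bdev; the first is excluded by starting at `i = 1`, matching the iteration counts visible in the trace.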
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.491 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.057 [2024-12-07 02:48:23.833138] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.057 [2024-12-07 02:48:23.833206] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.057 [2024-12-07 02:48:23.833275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.057 [2024-12-07 02:48:23.833355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.057 [2024-12-07 02:48:23.833405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.057 [2024-12-07 02:48:23.905036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:13.057 [2024-12-07 02:48:23.905126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.057 [2024-12-07 02:48:23.905165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:13.057 [2024-12-07 02:48:23.905193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.057 [2024-12-07 02:48:23.907382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.057 [2024-12-07 02:48:23.907453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:13.057 [2024-12-07 02:48:23.907529] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:13.057 [2024-12-07 02:48:23.907597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:13.057 
[2024-12-07 02:48:23.907729] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:13.057 [2024-12-07 02:48:23.907791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.057 [2024-12-07 02:48:23.907853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:15:13.057 [2024-12-07 02:48:23.907913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.057 [2024-12-07 02:48:23.908071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:13.057 pt1 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.057 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.057 "name": "raid_bdev1", 00:15:13.057 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:13.057 "strip_size_kb": 64, 00:15:13.057 "state": "configuring", 00:15:13.057 "raid_level": "raid5f", 00:15:13.057 "superblock": true, 00:15:13.057 "num_base_bdevs": 4, 00:15:13.057 "num_base_bdevs_discovered": 2, 00:15:13.057 "num_base_bdevs_operational": 3, 00:15:13.057 "base_bdevs_list": [ 00:15:13.057 { 00:15:13.057 "name": null, 00:15:13.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.057 "is_configured": false, 00:15:13.057 "data_offset": 2048, 00:15:13.057 "data_size": 63488 00:15:13.057 }, 00:15:13.057 { 00:15:13.057 "name": "pt2", 00:15:13.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.057 "is_configured": true, 00:15:13.057 "data_offset": 2048, 00:15:13.057 "data_size": 63488 00:15:13.057 }, 00:15:13.057 { 00:15:13.057 "name": "pt3", 00:15:13.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.057 "is_configured": true, 00:15:13.057 "data_offset": 2048, 00:15:13.057 "data_size": 63488 00:15:13.057 }, 00:15:13.057 { 00:15:13.057 "name": null, 00:15:13.057 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:13.057 "is_configured": false, 00:15:13.057 "data_offset": 2048, 00:15:13.058 "data_size": 63488 00:15:13.058 } 00:15:13.058 ] 
00:15:13.058 }' 00:15:13.058 02:48:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.058 02:48:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.316 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.316 [2024-12-07 02:48:24.376237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:13.316 [2024-12-07 02:48:24.376323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.316 [2024-12-07 02:48:24.376355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:13.316 [2024-12-07 02:48:24.376386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.316 [2024-12-07 02:48:24.376731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.316 [2024-12-07 02:48:24.376789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:13.316 [2024-12-07 02:48:24.376866] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:13.316 [2024-12-07 02:48:24.376913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:13.316 [2024-12-07 02:48:24.377015] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:13.316 [2024-12-07 02:48:24.377055] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:13.316 [2024-12-07 02:48:24.377282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:13.316 [2024-12-07 02:48:24.377816] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:13.316 [2024-12-07 02:48:24.377865] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:13.317 [2024-12-07 02:48:24.378063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.317 pt4 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.317 02:48:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.317 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.576 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.576 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.576 "name": "raid_bdev1", 00:15:13.576 "uuid": "b900a5c7-e69a-4149-9119-9c12f20a5464", 00:15:13.576 "strip_size_kb": 64, 00:15:13.576 "state": "online", 00:15:13.576 "raid_level": "raid5f", 00:15:13.576 "superblock": true, 00:15:13.576 "num_base_bdevs": 4, 00:15:13.576 "num_base_bdevs_discovered": 3, 00:15:13.576 "num_base_bdevs_operational": 3, 00:15:13.576 "base_bdevs_list": [ 00:15:13.576 { 00:15:13.576 "name": null, 00:15:13.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.576 "is_configured": false, 00:15:13.576 "data_offset": 2048, 00:15:13.576 "data_size": 63488 00:15:13.576 }, 00:15:13.576 { 00:15:13.576 "name": "pt2", 00:15:13.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.576 "is_configured": true, 00:15:13.576 "data_offset": 2048, 00:15:13.576 "data_size": 63488 00:15:13.576 }, 00:15:13.576 { 00:15:13.576 "name": "pt3", 00:15:13.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.576 "is_configured": true, 00:15:13.576 "data_offset": 2048, 00:15:13.576 "data_size": 63488 
00:15:13.576 }, 00:15:13.576 { 00:15:13.576 "name": "pt4", 00:15:13.576 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:13.576 "is_configured": true, 00:15:13.576 "data_offset": 2048, 00:15:13.576 "data_size": 63488 00:15:13.576 } 00:15:13.576 ] 00:15:13.576 }' 00:15:13.576 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.576 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.836 [2024-12-07 02:48:24.872563] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b900a5c7-e69a-4149-9119-9c12f20a5464 '!=' b900a5c7-e69a-4149-9119-9c12f20a5464 ']' 00:15:13.836 02:48:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94814 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94814 ']' 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94814 00:15:13.836 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:14.096 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.096 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94814 00:15:14.096 killing process with pid 94814 00:15:14.096 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.096 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.096 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94814' 00:15:14.096 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94814 00:15:14.096 [2024-12-07 02:48:24.951890] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.096 [2024-12-07 02:48:24.952009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.096 02:48:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94814 00:15:14.096 [2024-12-07 02:48:24.952101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.096 [2024-12-07 02:48:24.952113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:14.096 [2024-12-07 02:48:25.031626] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.355 02:48:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:14.355 
00:15:14.355 real 0m7.224s 00:15:14.355 user 0m11.945s 00:15:14.355 sys 0m1.611s 00:15:14.355 02:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:14.355 02:48:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.355 ************************************ 00:15:14.355 END TEST raid5f_superblock_test 00:15:14.355 ************************************ 00:15:14.615 02:48:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:14.615 02:48:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:14.615 02:48:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:14.615 02:48:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:14.615 02:48:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.615 ************************************ 00:15:14.615 START TEST raid5f_rebuild_test 00:15:14.615 ************************************ 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:14.615 02:48:25 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=95288 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 95288 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 95288 ']' 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.615 02:48:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.615 [2024-12-07 02:48:25.601216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:15:14.615 [2024-12-07 02:48:25.601530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95288 ] 00:15:14.615 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:14.615 Zero copy mechanism will not be used. 00:15:14.875 [2024-12-07 02:48:25.765707] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.875 [2024-12-07 02:48:25.835598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.875 [2024-12-07 02:48:25.912721] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.875 [2024-12-07 02:48:25.912767] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:15.443 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.443 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.444 BaseBdev1_malloc 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:15:15.444 [2024-12-07 02:48:26.435751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:15.444 [2024-12-07 02:48:26.435901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.444 [2024-12-07 02:48:26.435935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:15.444 [2024-12-07 02:48:26.435960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.444 [2024-12-07 02:48:26.438386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.444 [2024-12-07 02:48:26.438424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:15.444 BaseBdev1 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.444 BaseBdev2_malloc 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.444 [2024-12-07 02:48:26.487746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:15.444 [2024-12-07 02:48:26.487969] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.444 [2024-12-07 02:48:26.488026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:15.444 [2024-12-07 02:48:26.488049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.444 [2024-12-07 02:48:26.492715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.444 [2024-12-07 02:48:26.492764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:15.444 BaseBdev2 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.444 BaseBdev3_malloc 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.444 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.703 [2024-12-07 02:48:26.524982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:15.703 [2024-12-07 02:48:26.525031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.703 [2024-12-07 02:48:26.525059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:15.703 
[2024-12-07 02:48:26.525069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.703 [2024-12-07 02:48:26.527418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.703 [2024-12-07 02:48:26.527461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:15.703 BaseBdev3 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.703 BaseBdev4_malloc 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.703 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.703 [2024-12-07 02:48:26.559667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:15.703 [2024-12-07 02:48:26.559719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.703 [2024-12-07 02:48:26.559744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:15.703 [2024-12-07 02:48:26.559753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.703 [2024-12-07 02:48:26.562030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:15.703 [2024-12-07 02:48:26.562063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:15.703 BaseBdev4 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.704 spare_malloc 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.704 spare_delay 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.704 [2024-12-07 02:48:26.606147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:15.704 [2024-12-07 02:48:26.606196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.704 [2024-12-07 02:48:26.606216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:15.704 [2024-12-07 02:48:26.606226] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.704 [2024-12-07 02:48:26.608636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.704 [2024-12-07 02:48:26.608706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:15.704 spare 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.704 [2024-12-07 02:48:26.618226] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.704 [2024-12-07 02:48:26.620323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.704 [2024-12-07 02:48:26.620387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.704 [2024-12-07 02:48:26.620425] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.704 [2024-12-07 02:48:26.620508] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:15.704 [2024-12-07 02:48:26.620518] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:15.704 [2024-12-07 02:48:26.620774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:15.704 [2024-12-07 02:48:26.621255] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:15.704 [2024-12-07 02:48:26.621276] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:15.704 [2024-12-07 
02:48:26.621400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.704 "name": "raid_bdev1", 00:15:15.704 "uuid": 
"d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:15.704 "strip_size_kb": 64, 00:15:15.704 "state": "online", 00:15:15.704 "raid_level": "raid5f", 00:15:15.704 "superblock": false, 00:15:15.704 "num_base_bdevs": 4, 00:15:15.704 "num_base_bdevs_discovered": 4, 00:15:15.704 "num_base_bdevs_operational": 4, 00:15:15.704 "base_bdevs_list": [ 00:15:15.704 { 00:15:15.704 "name": "BaseBdev1", 00:15:15.704 "uuid": "f26a6e78-1b27-546c-a8b1-b6eced27730a", 00:15:15.704 "is_configured": true, 00:15:15.704 "data_offset": 0, 00:15:15.704 "data_size": 65536 00:15:15.704 }, 00:15:15.704 { 00:15:15.704 "name": "BaseBdev2", 00:15:15.704 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:15.704 "is_configured": true, 00:15:15.704 "data_offset": 0, 00:15:15.704 "data_size": 65536 00:15:15.704 }, 00:15:15.704 { 00:15:15.704 "name": "BaseBdev3", 00:15:15.704 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:15.704 "is_configured": true, 00:15:15.704 "data_offset": 0, 00:15:15.704 "data_size": 65536 00:15:15.704 }, 00:15:15.704 { 00:15:15.704 "name": "BaseBdev4", 00:15:15.704 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:15.704 "is_configured": true, 00:15:15.704 "data_offset": 0, 00:15:15.704 "data_size": 65536 00:15:15.704 } 00:15:15.704 ] 00:15:15.704 }' 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.704 02:48:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.273 [2024-12-07 02:48:27.079831] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.273 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:16.273 [2024-12-07 02:48:27.347296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:16.533 /dev/nbd0 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:16.533 1+0 records in 00:15:16.533 1+0 records out 00:15:16.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512238 s, 8.0 MB/s 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.533 02:48:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:16.533 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:16.793 512+0 records in 00:15:16.793 512+0 records out 00:15:16.793 100663296 bytes (101 MB, 96 MiB) copied, 0.391694 s, 257 MB/s 00:15:16.793 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:16.793 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:16.793 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:16.793 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:16.793 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:16.793 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:16.793 02:48:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:17.053 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.053 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.054 [2024-12-07 02:48:28.038415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 [2024-12-07 02:48:28.050467] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.054 "name": "raid_bdev1", 00:15:17.054 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:17.054 "strip_size_kb": 64, 00:15:17.054 "state": "online", 00:15:17.054 "raid_level": "raid5f", 00:15:17.054 "superblock": false, 00:15:17.054 "num_base_bdevs": 4, 00:15:17.054 "num_base_bdevs_discovered": 3, 00:15:17.054 "num_base_bdevs_operational": 3, 00:15:17.054 "base_bdevs_list": [ 00:15:17.054 { 00:15:17.054 "name": null, 00:15:17.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.054 "is_configured": false, 00:15:17.054 "data_offset": 0, 00:15:17.054 "data_size": 65536 00:15:17.054 }, 00:15:17.054 { 00:15:17.054 "name": "BaseBdev2", 00:15:17.054 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:17.054 "is_configured": true, 00:15:17.054 
"data_offset": 0, 00:15:17.054 "data_size": 65536 00:15:17.054 }, 00:15:17.054 { 00:15:17.054 "name": "BaseBdev3", 00:15:17.054 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:17.054 "is_configured": true, 00:15:17.054 "data_offset": 0, 00:15:17.054 "data_size": 65536 00:15:17.054 }, 00:15:17.054 { 00:15:17.054 "name": "BaseBdev4", 00:15:17.054 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:17.054 "is_configured": true, 00:15:17.054 "data_offset": 0, 00:15:17.054 "data_size": 65536 00:15:17.054 } 00:15:17.054 ] 00:15:17.054 }' 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.054 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.623 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:17.623 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.623 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.623 [2024-12-07 02:48:28.521658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.623 [2024-12-07 02:48:28.525174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:15:17.623 [2024-12-07 02:48:28.527402] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.623 02:48:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.623 02:48:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.564 "name": "raid_bdev1", 00:15:18.564 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:18.564 "strip_size_kb": 64, 00:15:18.564 "state": "online", 00:15:18.564 "raid_level": "raid5f", 00:15:18.564 "superblock": false, 00:15:18.564 "num_base_bdevs": 4, 00:15:18.564 "num_base_bdevs_discovered": 4, 00:15:18.564 "num_base_bdevs_operational": 4, 00:15:18.564 "process": { 00:15:18.564 "type": "rebuild", 00:15:18.564 "target": "spare", 00:15:18.564 "progress": { 00:15:18.564 "blocks": 19200, 00:15:18.564 "percent": 9 00:15:18.564 } 00:15:18.564 }, 00:15:18.564 "base_bdevs_list": [ 00:15:18.564 { 00:15:18.564 "name": "spare", 00:15:18.564 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:18.564 "is_configured": true, 00:15:18.564 "data_offset": 0, 00:15:18.564 "data_size": 65536 00:15:18.564 }, 00:15:18.564 { 00:15:18.564 "name": "BaseBdev2", 00:15:18.564 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:18.564 "is_configured": true, 00:15:18.564 "data_offset": 0, 00:15:18.564 "data_size": 65536 00:15:18.564 }, 00:15:18.564 { 00:15:18.564 "name": "BaseBdev3", 00:15:18.564 "uuid": 
"9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:18.564 "is_configured": true, 00:15:18.564 "data_offset": 0, 00:15:18.564 "data_size": 65536 00:15:18.564 }, 00:15:18.564 { 00:15:18.564 "name": "BaseBdev4", 00:15:18.564 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:18.564 "is_configured": true, 00:15:18.564 "data_offset": 0, 00:15:18.564 "data_size": 65536 00:15:18.564 } 00:15:18.564 ] 00:15:18.564 }' 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.564 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.832 [2024-12-07 02:48:29.689934] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.832 [2024-12-07 02:48:29.732676] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:18.832 [2024-12-07 02:48:29.732776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.832 [2024-12-07 02:48:29.732817] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.832 [2024-12-07 02:48:29.732837] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.832 "name": "raid_bdev1", 00:15:18.832 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:18.832 "strip_size_kb": 64, 00:15:18.832 "state": "online", 00:15:18.832 "raid_level": "raid5f", 00:15:18.832 "superblock": false, 00:15:18.832 "num_base_bdevs": 4, 00:15:18.832 "num_base_bdevs_discovered": 3, 00:15:18.832 
"num_base_bdevs_operational": 3, 00:15:18.832 "base_bdevs_list": [ 00:15:18.832 { 00:15:18.832 "name": null, 00:15:18.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.832 "is_configured": false, 00:15:18.832 "data_offset": 0, 00:15:18.832 "data_size": 65536 00:15:18.832 }, 00:15:18.832 { 00:15:18.832 "name": "BaseBdev2", 00:15:18.832 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:18.832 "is_configured": true, 00:15:18.832 "data_offset": 0, 00:15:18.832 "data_size": 65536 00:15:18.832 }, 00:15:18.832 { 00:15:18.832 "name": "BaseBdev3", 00:15:18.832 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:18.832 "is_configured": true, 00:15:18.832 "data_offset": 0, 00:15:18.832 "data_size": 65536 00:15:18.832 }, 00:15:18.832 { 00:15:18.832 "name": "BaseBdev4", 00:15:18.832 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:18.832 "is_configured": true, 00:15:18.832 "data_offset": 0, 00:15:18.832 "data_size": 65536 00:15:18.832 } 00:15:18.832 ] 00:15:18.832 }' 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.832 02:48:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.135 02:48:30 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.135 02:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.411 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.411 "name": "raid_bdev1", 00:15:19.411 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:19.411 "strip_size_kb": 64, 00:15:19.411 "state": "online", 00:15:19.411 "raid_level": "raid5f", 00:15:19.411 "superblock": false, 00:15:19.411 "num_base_bdevs": 4, 00:15:19.411 "num_base_bdevs_discovered": 3, 00:15:19.411 "num_base_bdevs_operational": 3, 00:15:19.411 "base_bdevs_list": [ 00:15:19.411 { 00:15:19.411 "name": null, 00:15:19.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.411 "is_configured": false, 00:15:19.411 "data_offset": 0, 00:15:19.411 "data_size": 65536 00:15:19.411 }, 00:15:19.411 { 00:15:19.411 "name": "BaseBdev2", 00:15:19.411 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:19.411 "is_configured": true, 00:15:19.411 "data_offset": 0, 00:15:19.411 "data_size": 65536 00:15:19.411 }, 00:15:19.411 { 00:15:19.411 "name": "BaseBdev3", 00:15:19.411 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:19.411 "is_configured": true, 00:15:19.411 "data_offset": 0, 00:15:19.411 "data_size": 65536 00:15:19.411 }, 00:15:19.411 { 00:15:19.412 "name": "BaseBdev4", 00:15:19.412 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:19.412 "is_configured": true, 00:15:19.412 "data_offset": 0, 00:15:19.412 "data_size": 65536 00:15:19.412 } 00:15:19.412 ] 00:15:19.412 }' 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.412 [2024-12-07 02:48:30.316938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.412 [2024-12-07 02:48:30.319854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:19.412 [2024-12-07 02:48:30.321982] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.412 02:48:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.351 02:48:31 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.351 "name": "raid_bdev1", 00:15:20.351 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:20.351 "strip_size_kb": 64, 00:15:20.351 "state": "online", 00:15:20.351 "raid_level": "raid5f", 00:15:20.351 "superblock": false, 00:15:20.351 "num_base_bdevs": 4, 00:15:20.351 "num_base_bdevs_discovered": 4, 00:15:20.351 "num_base_bdevs_operational": 4, 00:15:20.351 "process": { 00:15:20.351 "type": "rebuild", 00:15:20.351 "target": "spare", 00:15:20.351 "progress": { 00:15:20.351 "blocks": 19200, 00:15:20.351 "percent": 9 00:15:20.351 } 00:15:20.351 }, 00:15:20.351 "base_bdevs_list": [ 00:15:20.351 { 00:15:20.351 "name": "spare", 00:15:20.351 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:20.351 "is_configured": true, 00:15:20.351 "data_offset": 0, 00:15:20.351 "data_size": 65536 00:15:20.351 }, 00:15:20.351 { 00:15:20.351 "name": "BaseBdev2", 00:15:20.351 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:20.351 "is_configured": true, 00:15:20.351 "data_offset": 0, 00:15:20.351 "data_size": 65536 00:15:20.351 }, 00:15:20.351 { 00:15:20.351 "name": "BaseBdev3", 00:15:20.351 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:20.351 "is_configured": true, 00:15:20.351 "data_offset": 0, 00:15:20.351 "data_size": 65536 00:15:20.351 }, 00:15:20.351 { 00:15:20.351 "name": "BaseBdev4", 00:15:20.351 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:20.351 "is_configured": true, 00:15:20.351 "data_offset": 0, 00:15:20.351 "data_size": 65536 00:15:20.351 } 00:15:20.351 ] 00:15:20.351 }' 00:15:20.351 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=524 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.611 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.611 
"name": "raid_bdev1", 00:15:20.611 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:20.611 "strip_size_kb": 64, 00:15:20.611 "state": "online", 00:15:20.611 "raid_level": "raid5f", 00:15:20.611 "superblock": false, 00:15:20.611 "num_base_bdevs": 4, 00:15:20.611 "num_base_bdevs_discovered": 4, 00:15:20.611 "num_base_bdevs_operational": 4, 00:15:20.611 "process": { 00:15:20.611 "type": "rebuild", 00:15:20.611 "target": "spare", 00:15:20.611 "progress": { 00:15:20.611 "blocks": 21120, 00:15:20.611 "percent": 10 00:15:20.611 } 00:15:20.611 }, 00:15:20.611 "base_bdevs_list": [ 00:15:20.611 { 00:15:20.612 "name": "spare", 00:15:20.612 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:20.612 "is_configured": true, 00:15:20.612 "data_offset": 0, 00:15:20.612 "data_size": 65536 00:15:20.612 }, 00:15:20.612 { 00:15:20.612 "name": "BaseBdev2", 00:15:20.612 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:20.612 "is_configured": true, 00:15:20.612 "data_offset": 0, 00:15:20.612 "data_size": 65536 00:15:20.612 }, 00:15:20.612 { 00:15:20.612 "name": "BaseBdev3", 00:15:20.612 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:20.612 "is_configured": true, 00:15:20.612 "data_offset": 0, 00:15:20.612 "data_size": 65536 00:15:20.612 }, 00:15:20.612 { 00:15:20.612 "name": "BaseBdev4", 00:15:20.612 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:20.612 "is_configured": true, 00:15:20.612 "data_offset": 0, 00:15:20.612 "data_size": 65536 00:15:20.612 } 00:15:20.612 ] 00:15:20.612 }' 00:15:20.612 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.612 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.612 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.612 02:48:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.612 02:48:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.993 "name": "raid_bdev1", 00:15:21.993 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:21.993 "strip_size_kb": 64, 00:15:21.993 "state": "online", 00:15:21.993 "raid_level": "raid5f", 00:15:21.993 "superblock": false, 00:15:21.993 "num_base_bdevs": 4, 00:15:21.993 "num_base_bdevs_discovered": 4, 00:15:21.993 "num_base_bdevs_operational": 4, 00:15:21.993 "process": { 00:15:21.993 "type": "rebuild", 00:15:21.993 "target": "spare", 00:15:21.993 "progress": { 00:15:21.993 "blocks": 44160, 00:15:21.993 "percent": 22 00:15:21.993 } 00:15:21.993 }, 00:15:21.993 "base_bdevs_list": [ 00:15:21.993 { 
00:15:21.993 "name": "spare", 00:15:21.993 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:21.993 "is_configured": true, 00:15:21.993 "data_offset": 0, 00:15:21.993 "data_size": 65536 00:15:21.993 }, 00:15:21.993 { 00:15:21.993 "name": "BaseBdev2", 00:15:21.993 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:21.993 "is_configured": true, 00:15:21.993 "data_offset": 0, 00:15:21.993 "data_size": 65536 00:15:21.993 }, 00:15:21.993 { 00:15:21.993 "name": "BaseBdev3", 00:15:21.993 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:21.993 "is_configured": true, 00:15:21.993 "data_offset": 0, 00:15:21.993 "data_size": 65536 00:15:21.993 }, 00:15:21.993 { 00:15:21.993 "name": "BaseBdev4", 00:15:21.993 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:21.993 "is_configured": true, 00:15:21.993 "data_offset": 0, 00:15:21.993 "data_size": 65536 00:15:21.993 } 00:15:21.993 ] 00:15:21.993 }' 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.993 02:48:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.934 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.934 "name": "raid_bdev1", 00:15:22.934 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:22.934 "strip_size_kb": 64, 00:15:22.934 "state": "online", 00:15:22.934 "raid_level": "raid5f", 00:15:22.934 "superblock": false, 00:15:22.934 "num_base_bdevs": 4, 00:15:22.934 "num_base_bdevs_discovered": 4, 00:15:22.934 "num_base_bdevs_operational": 4, 00:15:22.934 "process": { 00:15:22.934 "type": "rebuild", 00:15:22.934 "target": "spare", 00:15:22.934 "progress": { 00:15:22.935 "blocks": 65280, 00:15:22.935 "percent": 33 00:15:22.935 } 00:15:22.935 }, 00:15:22.935 "base_bdevs_list": [ 00:15:22.935 { 00:15:22.935 "name": "spare", 00:15:22.935 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:22.935 "is_configured": true, 00:15:22.935 "data_offset": 0, 00:15:22.935 "data_size": 65536 00:15:22.935 }, 00:15:22.935 { 00:15:22.935 "name": "BaseBdev2", 00:15:22.935 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:22.935 "is_configured": true, 00:15:22.935 "data_offset": 0, 00:15:22.935 "data_size": 65536 00:15:22.935 }, 00:15:22.935 { 00:15:22.935 "name": "BaseBdev3", 00:15:22.935 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:22.935 "is_configured": true, 00:15:22.935 "data_offset": 0, 00:15:22.935 
"data_size": 65536 00:15:22.935 }, 00:15:22.935 { 00:15:22.935 "name": "BaseBdev4", 00:15:22.935 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:22.935 "is_configured": true, 00:15:22.935 "data_offset": 0, 00:15:22.935 "data_size": 65536 00:15:22.935 } 00:15:22.935 ] 00:15:22.935 }' 00:15:22.935 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.935 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.935 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.935 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.935 02:48:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.315 02:48:34 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.315 02:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.315 "name": "raid_bdev1", 00:15:24.315 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:24.316 "strip_size_kb": 64, 00:15:24.316 "state": "online", 00:15:24.316 "raid_level": "raid5f", 00:15:24.316 "superblock": false, 00:15:24.316 "num_base_bdevs": 4, 00:15:24.316 "num_base_bdevs_discovered": 4, 00:15:24.316 "num_base_bdevs_operational": 4, 00:15:24.316 "process": { 00:15:24.316 "type": "rebuild", 00:15:24.316 "target": "spare", 00:15:24.316 "progress": { 00:15:24.316 "blocks": 88320, 00:15:24.316 "percent": 44 00:15:24.316 } 00:15:24.316 }, 00:15:24.316 "base_bdevs_list": [ 00:15:24.316 { 00:15:24.316 "name": "spare", 00:15:24.316 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 }, 00:15:24.316 { 00:15:24.316 "name": "BaseBdev2", 00:15:24.316 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 }, 00:15:24.316 { 00:15:24.316 "name": "BaseBdev3", 00:15:24.316 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 }, 00:15:24.316 { 00:15:24.316 "name": "BaseBdev4", 00:15:24.316 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:24.316 "is_configured": true, 00:15:24.316 "data_offset": 0, 00:15:24.316 "data_size": 65536 00:15:24.316 } 00:15:24.316 ] 00:15:24.316 }' 00:15:24.316 02:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.316 02:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.316 02:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:24.316 02:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.316 02:48:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.253 "name": "raid_bdev1", 00:15:25.253 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:25.253 "strip_size_kb": 64, 00:15:25.253 "state": "online", 00:15:25.253 "raid_level": "raid5f", 00:15:25.253 "superblock": false, 00:15:25.253 "num_base_bdevs": 4, 00:15:25.253 "num_base_bdevs_discovered": 4, 00:15:25.253 "num_base_bdevs_operational": 4, 00:15:25.253 "process": { 00:15:25.253 "type": "rebuild", 00:15:25.253 "target": "spare", 00:15:25.253 
"progress": { 00:15:25.253 "blocks": 109440, 00:15:25.253 "percent": 55 00:15:25.253 } 00:15:25.253 }, 00:15:25.253 "base_bdevs_list": [ 00:15:25.253 { 00:15:25.253 "name": "spare", 00:15:25.253 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:25.253 "is_configured": true, 00:15:25.253 "data_offset": 0, 00:15:25.253 "data_size": 65536 00:15:25.253 }, 00:15:25.253 { 00:15:25.253 "name": "BaseBdev2", 00:15:25.253 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:25.253 "is_configured": true, 00:15:25.253 "data_offset": 0, 00:15:25.253 "data_size": 65536 00:15:25.253 }, 00:15:25.253 { 00:15:25.253 "name": "BaseBdev3", 00:15:25.253 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:25.253 "is_configured": true, 00:15:25.253 "data_offset": 0, 00:15:25.253 "data_size": 65536 00:15:25.253 }, 00:15:25.253 { 00:15:25.253 "name": "BaseBdev4", 00:15:25.253 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:25.253 "is_configured": true, 00:15:25.253 "data_offset": 0, 00:15:25.253 "data_size": 65536 00:15:25.253 } 00:15:25.253 ] 00:15:25.253 }' 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.253 02:48:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.192 02:48:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.192 02:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.451 02:48:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.451 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.451 "name": "raid_bdev1", 00:15:26.451 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:26.451 "strip_size_kb": 64, 00:15:26.451 "state": "online", 00:15:26.451 "raid_level": "raid5f", 00:15:26.451 "superblock": false, 00:15:26.451 "num_base_bdevs": 4, 00:15:26.451 "num_base_bdevs_discovered": 4, 00:15:26.451 "num_base_bdevs_operational": 4, 00:15:26.451 "process": { 00:15:26.451 "type": "rebuild", 00:15:26.452 "target": "spare", 00:15:26.452 "progress": { 00:15:26.452 "blocks": 132480, 00:15:26.452 "percent": 67 00:15:26.452 } 00:15:26.452 }, 00:15:26.452 "base_bdevs_list": [ 00:15:26.452 { 00:15:26.452 "name": "spare", 00:15:26.452 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 0, 00:15:26.452 "data_size": 65536 00:15:26.452 }, 00:15:26.452 { 00:15:26.452 "name": "BaseBdev2", 00:15:26.452 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 0, 00:15:26.452 "data_size": 65536 00:15:26.452 }, 00:15:26.452 { 
00:15:26.452 "name": "BaseBdev3", 00:15:26.452 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 0, 00:15:26.452 "data_size": 65536 00:15:26.452 }, 00:15:26.452 { 00:15:26.452 "name": "BaseBdev4", 00:15:26.452 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:26.452 "is_configured": true, 00:15:26.452 "data_offset": 0, 00:15:26.452 "data_size": 65536 00:15:26.452 } 00:15:26.452 ] 00:15:26.452 }' 00:15:26.452 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.452 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.452 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.452 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.452 02:48:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.391 02:48:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.650 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.650 "name": "raid_bdev1", 00:15:27.650 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:27.650 "strip_size_kb": 64, 00:15:27.650 "state": "online", 00:15:27.650 "raid_level": "raid5f", 00:15:27.650 "superblock": false, 00:15:27.650 "num_base_bdevs": 4, 00:15:27.650 "num_base_bdevs_discovered": 4, 00:15:27.650 "num_base_bdevs_operational": 4, 00:15:27.650 "process": { 00:15:27.650 "type": "rebuild", 00:15:27.650 "target": "spare", 00:15:27.650 "progress": { 00:15:27.650 "blocks": 153600, 00:15:27.650 "percent": 78 00:15:27.650 } 00:15:27.650 }, 00:15:27.650 "base_bdevs_list": [ 00:15:27.650 { 00:15:27.650 "name": "spare", 00:15:27.650 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:27.650 "is_configured": true, 00:15:27.650 "data_offset": 0, 00:15:27.650 "data_size": 65536 00:15:27.650 }, 00:15:27.650 { 00:15:27.650 "name": "BaseBdev2", 00:15:27.650 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:27.650 "is_configured": true, 00:15:27.650 "data_offset": 0, 00:15:27.650 "data_size": 65536 00:15:27.650 }, 00:15:27.650 { 00:15:27.650 "name": "BaseBdev3", 00:15:27.650 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:27.650 "is_configured": true, 00:15:27.650 "data_offset": 0, 00:15:27.650 "data_size": 65536 00:15:27.650 }, 00:15:27.650 { 00:15:27.650 "name": "BaseBdev4", 00:15:27.650 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:27.650 "is_configured": true, 00:15:27.650 "data_offset": 0, 00:15:27.650 "data_size": 65536 00:15:27.650 } 00:15:27.650 ] 00:15:27.650 }' 00:15:27.650 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.650 02:48:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.650 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.650 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.650 02:48:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.590 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.590 "name": "raid_bdev1", 00:15:28.590 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:28.590 "strip_size_kb": 64, 00:15:28.590 "state": "online", 00:15:28.590 "raid_level": "raid5f", 00:15:28.590 "superblock": false, 00:15:28.590 "num_base_bdevs": 4, 00:15:28.590 
"num_base_bdevs_discovered": 4, 00:15:28.590 "num_base_bdevs_operational": 4, 00:15:28.590 "process": { 00:15:28.590 "type": "rebuild", 00:15:28.590 "target": "spare", 00:15:28.590 "progress": { 00:15:28.590 "blocks": 176640, 00:15:28.590 "percent": 89 00:15:28.590 } 00:15:28.590 }, 00:15:28.590 "base_bdevs_list": [ 00:15:28.591 { 00:15:28.591 "name": "spare", 00:15:28.591 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:28.591 "is_configured": true, 00:15:28.591 "data_offset": 0, 00:15:28.591 "data_size": 65536 00:15:28.591 }, 00:15:28.591 { 00:15:28.591 "name": "BaseBdev2", 00:15:28.591 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:28.591 "is_configured": true, 00:15:28.591 "data_offset": 0, 00:15:28.591 "data_size": 65536 00:15:28.591 }, 00:15:28.591 { 00:15:28.591 "name": "BaseBdev3", 00:15:28.591 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:28.591 "is_configured": true, 00:15:28.591 "data_offset": 0, 00:15:28.591 "data_size": 65536 00:15:28.591 }, 00:15:28.591 { 00:15:28.591 "name": "BaseBdev4", 00:15:28.591 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:28.591 "is_configured": true, 00:15:28.591 "data_offset": 0, 00:15:28.591 "data_size": 65536 00:15:28.591 } 00:15:28.591 ] 00:15:28.591 }' 00:15:28.591 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.850 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.850 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.850 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.850 02:48:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.785 [2024-12-07 02:48:40.660976] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:29.785 [2024-12-07 02:48:40.661042] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:29.785 [2024-12-07 02:48:40.661081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.785 "name": "raid_bdev1", 00:15:29.785 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:29.785 "strip_size_kb": 64, 00:15:29.785 "state": "online", 00:15:29.785 "raid_level": "raid5f", 00:15:29.785 "superblock": false, 00:15:29.785 "num_base_bdevs": 4, 00:15:29.785 "num_base_bdevs_discovered": 4, 00:15:29.785 "num_base_bdevs_operational": 4, 00:15:29.785 "base_bdevs_list": [ 00:15:29.785 { 00:15:29.785 "name": "spare", 00:15:29.785 "uuid": 
"9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:29.785 "is_configured": true, 00:15:29.785 "data_offset": 0, 00:15:29.785 "data_size": 65536 00:15:29.785 }, 00:15:29.785 { 00:15:29.785 "name": "BaseBdev2", 00:15:29.785 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:29.785 "is_configured": true, 00:15:29.785 "data_offset": 0, 00:15:29.785 "data_size": 65536 00:15:29.785 }, 00:15:29.785 { 00:15:29.785 "name": "BaseBdev3", 00:15:29.785 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:29.785 "is_configured": true, 00:15:29.785 "data_offset": 0, 00:15:29.785 "data_size": 65536 00:15:29.785 }, 00:15:29.785 { 00:15:29.785 "name": "BaseBdev4", 00:15:29.785 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:29.785 "is_configured": true, 00:15:29.785 "data_offset": 0, 00:15:29.785 "data_size": 65536 00:15:29.785 } 00:15:29.785 ] 00:15:29.785 }' 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:29.785 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.045 "name": "raid_bdev1", 00:15:30.045 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:30.045 "strip_size_kb": 64, 00:15:30.045 "state": "online", 00:15:30.045 "raid_level": "raid5f", 00:15:30.045 "superblock": false, 00:15:30.045 "num_base_bdevs": 4, 00:15:30.045 "num_base_bdevs_discovered": 4, 00:15:30.045 "num_base_bdevs_operational": 4, 00:15:30.045 "base_bdevs_list": [ 00:15:30.045 { 00:15:30.045 "name": "spare", 00:15:30.045 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": "BaseBdev2", 00:15:30.045 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": "BaseBdev3", 00:15:30.045 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": "BaseBdev4", 00:15:30.045 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 } 00:15:30.045 ] 00:15:30.045 }' 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:30.045 02:48:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:30.045 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.045 "name": "raid_bdev1", 00:15:30.045 "uuid": "d11e0ae3-f85d-4f44-a464-2ca31b627c47", 00:15:30.045 "strip_size_kb": 64, 00:15:30.045 "state": "online", 00:15:30.045 "raid_level": "raid5f", 00:15:30.045 "superblock": false, 00:15:30.045 "num_base_bdevs": 4, 00:15:30.045 "num_base_bdevs_discovered": 4, 00:15:30.045 "num_base_bdevs_operational": 4, 00:15:30.045 "base_bdevs_list": [ 00:15:30.045 { 00:15:30.045 "name": "spare", 00:15:30.045 "uuid": "9d68e8f4-3d64-5ff1-b295-0122b0bb898c", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": "BaseBdev2", 00:15:30.045 "uuid": "632e3423-5c58-5492-8fdd-32960cad875f", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": "BaseBdev3", 00:15:30.046 "uuid": "9b5a3367-26cb-551d-b1be-f8c6ef329191", 00:15:30.046 "is_configured": true, 00:15:30.046 "data_offset": 0, 00:15:30.046 "data_size": 65536 00:15:30.046 }, 00:15:30.046 { 00:15:30.046 "name": "BaseBdev4", 00:15:30.046 "uuid": "3c5b504b-f007-5ab3-b132-1269bbd91a6b", 00:15:30.046 "is_configured": true, 00:15:30.046 "data_offset": 0, 00:15:30.046 "data_size": 65536 00:15:30.046 } 00:15:30.046 ] 00:15:30.046 }' 00:15:30.046 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.046 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.614 [2024-12-07 02:48:41.445154] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.614 [2024-12-07 02:48:41.445225] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.614 [2024-12-07 02:48:41.445323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.614 [2024-12-07 02:48:41.445408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.614 [2024-12-07 02:48:41.445420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.614 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:30.873 /dev/nbd0 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.873 1+0 records in 
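The `waitfornbd` helper traced here retries a `grep -q -w` of `/proc/partitions` until the nbd device registers, then performs one direct-I/O `dd` read to prove the device can actually serve I/O. A simplified sketch of the polling half — the partition-file parameter is added here purely so the pattern can be exercised without a real nbd device; the helper in `common/autotest_common.sh` always reads `/proc/partitions`:

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitfornbd polling pattern: retry a word-match
# grep of the partition table up to 20 times before giving up. The second
# argument is an illustrative stand-in for /proc/partitions.
wait_for_partition() {
    local nbd_name=$1 part_file=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$part_file" && return 0
        sleep 0.1
    done
    return 1
}
```

The real helper follows a successful grep with `dd if=/dev/$nbd_name of=... bs=4096 count=1 iflag=direct`, so a device that appears in the partition table but cannot complete a read still fails the wait — which is exactly the `1+0 records in / 1+0 records out` output seen in the log.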
00:15:30.873 1+0 records out 00:15:30.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583881 s, 7.0 MB/s 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:30.873 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:31.133 /dev/nbd1 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:31.133 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:31.134 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.134 1+0 records in 00:15:31.134 1+0 records out 00:15:31.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230406 s, 17.8 MB/s 00:15:31.134 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.134 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:31.134 02:48:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.134 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.393 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
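The decisive rebuild check in this run is the `cmp -i 0 /dev/nbd0 /dev/nbd1` above: `BaseBdev1` and the rebuilt `spare` are exported over NBD and compared byte for byte, so a single differing byte fails the test. The shape of that check, sketched against ordinary files standing in for the two nbd devices:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@738 verification: compare two block devices
# (here, plain files as illustrative stand-ins for /dev/nbd0 and /dev/nbd1)
# byte for byte from offset 0. cmp exits non-zero at the first mismatch.
verify_identical() {
    cmp -i 0 "$1" "$2"
}
```

Because `cmp` stops at the first differing byte, a corrupted rebuild surfaces immediately rather than after scanning the whole device.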
00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 95288 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 95288 ']' 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 95288 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95288 00:15:31.653 killing process with pid 95288 00:15:31.653 Received shutdown signal, test time was about 60.000000 seconds 00:15:31.653 00:15:31.653 Latency(us) 00:15:31.653 [2024-12-07T02:48:42.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.653 [2024-12-07T02:48:42.731Z] =================================================================================================================== 00:15:31.653 [2024-12-07T02:48:42.731Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95288' 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 95288 00:15:31.653 [2024-12-07 02:48:42.533107] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.653 02:48:42 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@974 -- # wait 95288 00:15:31.653 [2024-12-07 02:48:42.583499] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:31.914 00:15:31.914 real 0m17.318s 00:15:31.914 user 0m20.969s 00:15:31.914 sys 0m2.410s 00:15:31.914 ************************************ 00:15:31.914 END TEST raid5f_rebuild_test 00:15:31.914 ************************************ 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.914 02:48:42 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:15:31.914 02:48:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:31.914 02:48:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.914 02:48:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.914 ************************************ 00:15:31.914 START TEST raid5f_rebuild_test_sb 00:15:31.914 ************************************ 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:31.914 
02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 
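The alternating `(( i <= num_base_bdevs ))` / `echo BaseBdevN` lines above are the xtrace of the loop in `bdev_raid.sh@574-576` that generates the `base_bdevs` array from a bdev count. A standalone sketch of that construction:

```shell
#!/usr/bin/env bash
# Rebuild the base_bdevs array the way bdev_raid.sh@574-576 does:
# a counting loop emits BaseBdev1..BaseBdevN, and command substitution
# collects the names into a bash array.
num_base_bdevs=4
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
echo "${base_bdevs[@]}"   # → BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```

The unquoted command substitution relies on word splitting, which is safe here because the generated names contain no whitespace.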
00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95769 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95769 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95769 ']' 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
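After launching bdevperf with `-z` (wait for RPC before starting I/O), the test calls `waitforlisten`, which blocks until the process opens its UNIX-domain RPC socket at `/var/tmp/spdk.sock`. A minimal sketch of that wait loop — the path-existence test and retry count are illustrative simplifications; the real helper checks specifically for the socket and also verifies with `kill -0` that the process is still alive, so a crashed server fails fast instead of timing out:

```shell
#!/usr/bin/env bash
# Minimal waitforlisten-style loop: poll until a path appears, up to a
# bounded number of tries. Illustrative simplification of the helper in
# common/autotest_common.sh, which waits for the RPC socket itself.
wait_for_path() {
    local path=$1 tries=${2:-100}
    while ((tries-- > 0)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}
```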
00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.914 02:48:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.179 [2024-12-07 02:48:42.996798] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:32.179 [2024-12-07 02:48:42.997011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95769 ] 00:15:32.179 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:32.179 Zero copy mechanism will not be used. 00:15:32.179 [2024-12-07 02:48:43.158867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.179 [2024-12-07 02:48:43.205210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.179 [2024-12-07 02:48:43.247608] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.179 [2024-12-07 02:48:43.247718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.746 BaseBdev1_malloc 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.746 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.746 [2024-12-07 02:48:43.822046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:32.746 [2024-12-07 02:48:43.822112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.746 [2024-12-07 02:48:43.822145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:32.746 [2024-12-07 02:48:43.822159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.006 [2024-12-07 02:48:43.824294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.006 [2024-12-07 02:48:43.824333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.006 BaseBdev1 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 BaseBdev2_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:33.006 
02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 [2024-12-07 02:48:43.863118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:33.006 [2024-12-07 02:48:43.863213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.006 [2024-12-07 02:48:43.863256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:33.006 [2024-12-07 02:48:43.863275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.006 [2024-12-07 02:48:43.867678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.006 [2024-12-07 02:48:43.867743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.006 BaseBdev2 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 BaseBdev3_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.006 [2024-12-07 02:48:43.893893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:33.006 [2024-12-07 02:48:43.893984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.006 [2024-12-07 02:48:43.894014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:33.006 [2024-12-07 02:48:43.894023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.006 [2024-12-07 02:48:43.896117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.006 [2024-12-07 02:48:43.896158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:33.006 BaseBdev3 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 BaseBdev4_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 [2024-12-07 02:48:43.922490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:33.006 
[2024-12-07 02:48:43.922541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.006 [2024-12-07 02:48:43.922565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:33.006 [2024-12-07 02:48:43.922573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.006 [2024-12-07 02:48:43.924624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.006 [2024-12-07 02:48:43.924701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:33.006 BaseBdev4 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 spare_malloc 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 spare_delay 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 [2024-12-07 02:48:43.962882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.006 [2024-12-07 02:48:43.962931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.006 [2024-12-07 02:48:43.962952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:33.006 [2024-12-07 02:48:43.962960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.006 [2024-12-07 02:48:43.964946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.006 [2024-12-07 02:48:43.964984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.006 spare 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.006 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.006 [2024-12-07 02:48:43.974953] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.006 [2024-12-07 02:48:43.976759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.006 [2024-12-07 02:48:43.976825] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:33.006 [2024-12-07 02:48:43.976863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:33.006 [2024-12-07 02:48:43.977024] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:15:33.006 [2024-12-07 
02:48:43.977040] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:33.006 [2024-12-07 02:48:43.977295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:33.006 [2024-12-07 02:48:43.977757] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:15:33.006 [2024-12-07 02:48:43.977771] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:15:33.007 [2024-12-07 02:48:43.977883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.007 02:48:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:33.007 02:48:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:33.007 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:33.007 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:33.007 "name": "raid_bdev1",
00:15:33.007 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5",
00:15:33.007 "strip_size_kb": 64,
00:15:33.007 "state": "online",
00:15:33.007 "raid_level": "raid5f",
00:15:33.007 "superblock": true,
00:15:33.007 "num_base_bdevs": 4,
00:15:33.007 "num_base_bdevs_discovered": 4,
00:15:33.007 "num_base_bdevs_operational": 4,
00:15:33.007 "base_bdevs_list": [
00:15:33.007 {
00:15:33.007 "name": "BaseBdev1",
00:15:33.007 "uuid": "b5b9110a-b7ea-52f9-91d8-e3956bb97893",
00:15:33.007 "is_configured": true,
00:15:33.007 "data_offset": 2048,
00:15:33.007 "data_size": 63488
00:15:33.007 },
00:15:33.007 {
00:15:33.007 "name": "BaseBdev2",
00:15:33.007 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89",
00:15:33.007 "is_configured": true,
00:15:33.007 "data_offset": 2048,
00:15:33.007 "data_size": 63488
00:15:33.007 },
00:15:33.007 {
00:15:33.007 "name": "BaseBdev3",
00:15:33.007 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663",
00:15:33.007 "is_configured": true,
00:15:33.007 "data_offset": 2048,
00:15:33.007 "data_size": 63488
00:15:33.007 },
00:15:33.007 {
00:15:33.007 "name": "BaseBdev4",
00:15:33.007 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c",
00:15:33.007 "is_configured": true,
00:15:33.007 "data_offset": 2048,
00:15:33.007 "data_size": 63488
00:15:33.007 }
00:15:33.007 ]
00:15:33.007 }'
00:15:33.007 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:33.007 02:48:44
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:33.576 [2024-12-07 02:48:44.431028] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:33.576 02:48:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.576 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:33.836 [2024-12-07 02:48:44.690468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:33.836 /dev/nbd0 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.836 1+0 records in 00:15:33.836 1+0 records out 00:15:33.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429552 s, 9.5 MB/s 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:33.836 02:48:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:15:34.406 496+0 records in 00:15:34.406 496+0 records out 00:15:34.406 97517568 bytes (98 MB, 93 MiB) copied, 0.51856 s, 188 MB/s 00:15:34.406 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:34.406 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.406 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:34.406 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.406 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:34.406 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.406 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.665 [2024-12-07 02:48:45.502158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:34.665 [2024-12-07 02:48:45.510220] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.665 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.666 "name": "raid_bdev1", 00:15:34.666 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:34.666 "strip_size_kb": 64, 00:15:34.666 "state": "online", 00:15:34.666 "raid_level": "raid5f", 00:15:34.666 "superblock": true, 00:15:34.666 "num_base_bdevs": 4, 00:15:34.666 "num_base_bdevs_discovered": 3, 00:15:34.666 "num_base_bdevs_operational": 3, 00:15:34.666 "base_bdevs_list": [ 00:15:34.666 { 00:15:34.666 "name": null, 00:15:34.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.666 "is_configured": false, 00:15:34.666 "data_offset": 0, 00:15:34.666 "data_size": 63488 00:15:34.666 }, 00:15:34.666 { 00:15:34.666 "name": "BaseBdev2", 00:15:34.666 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:34.666 "is_configured": true, 00:15:34.666 "data_offset": 2048, 00:15:34.666 "data_size": 63488 00:15:34.666 }, 00:15:34.666 { 00:15:34.666 "name": "BaseBdev3", 00:15:34.666 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:34.666 "is_configured": true, 00:15:34.666 "data_offset": 2048, 00:15:34.666 "data_size": 63488 00:15:34.666 }, 00:15:34.666 { 00:15:34.666 "name": "BaseBdev4", 00:15:34.666 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:34.666 "is_configured": true, 00:15:34.666 "data_offset": 2048, 00:15:34.666 "data_size": 63488 00:15:34.666 } 00:15:34.666 ] 00:15:34.666 }' 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.666 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.925 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.925 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.925 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.925 [2024-12-07 02:48:45.977561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:15:34.925 [2024-12-07 02:48:45.983681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:15:34.925 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.925 02:48:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:34.925 [2024-12-07 02:48:45.986077] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.303 02:48:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.303 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.303 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.303 "name": "raid_bdev1", 00:15:36.303 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:36.303 "strip_size_kb": 64, 00:15:36.303 "state": "online", 00:15:36.303 "raid_level": "raid5f", 00:15:36.303 "superblock": true, 00:15:36.303 "num_base_bdevs": 4, 
00:15:36.303 "num_base_bdevs_discovered": 4, 00:15:36.303 "num_base_bdevs_operational": 4, 00:15:36.303 "process": { 00:15:36.303 "type": "rebuild", 00:15:36.304 "target": "spare", 00:15:36.304 "progress": { 00:15:36.304 "blocks": 19200, 00:15:36.304 "percent": 10 00:15:36.304 } 00:15:36.304 }, 00:15:36.304 "base_bdevs_list": [ 00:15:36.304 { 00:15:36.304 "name": "spare", 00:15:36.304 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:36.304 "is_configured": true, 00:15:36.304 "data_offset": 2048, 00:15:36.304 "data_size": 63488 00:15:36.304 }, 00:15:36.304 { 00:15:36.304 "name": "BaseBdev2", 00:15:36.304 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:36.304 "is_configured": true, 00:15:36.304 "data_offset": 2048, 00:15:36.304 "data_size": 63488 00:15:36.304 }, 00:15:36.304 { 00:15:36.304 "name": "BaseBdev3", 00:15:36.304 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:36.304 "is_configured": true, 00:15:36.304 "data_offset": 2048, 00:15:36.304 "data_size": 63488 00:15:36.304 }, 00:15:36.304 { 00:15:36.304 "name": "BaseBdev4", 00:15:36.304 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:36.304 "is_configured": true, 00:15:36.304 "data_offset": 2048, 00:15:36.304 "data_size": 63488 00:15:36.304 } 00:15:36.304 ] 00:15:36.304 }' 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.304 02:48:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.304 [2024-12-07 02:48:47.145469] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.304 [2024-12-07 02:48:47.192761] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:36.304 [2024-12-07 02:48:47.192821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.304 [2024-12-07 02:48:47.192843] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:36.304 [2024-12-07 02:48:47.192853] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.304 "name": "raid_bdev1", 00:15:36.304 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:36.304 "strip_size_kb": 64, 00:15:36.304 "state": "online", 00:15:36.304 "raid_level": "raid5f", 00:15:36.304 "superblock": true, 00:15:36.304 "num_base_bdevs": 4, 00:15:36.304 "num_base_bdevs_discovered": 3, 00:15:36.304 "num_base_bdevs_operational": 3, 00:15:36.304 "base_bdevs_list": [ 00:15:36.304 { 00:15:36.304 "name": null, 00:15:36.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.304 "is_configured": false, 00:15:36.304 "data_offset": 0, 00:15:36.304 "data_size": 63488 00:15:36.304 }, 00:15:36.304 { 00:15:36.304 "name": "BaseBdev2", 00:15:36.304 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:36.304 "is_configured": true, 00:15:36.304 "data_offset": 2048, 00:15:36.304 "data_size": 63488 00:15:36.304 }, 00:15:36.304 { 00:15:36.304 "name": "BaseBdev3", 00:15:36.304 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:36.304 "is_configured": true, 00:15:36.304 "data_offset": 2048, 00:15:36.304 "data_size": 63488 00:15:36.304 }, 00:15:36.304 { 00:15:36.304 "name": "BaseBdev4", 00:15:36.304 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:36.304 "is_configured": true, 00:15:36.304 "data_offset": 2048, 00:15:36.304 "data_size": 63488 00:15:36.304 } 00:15:36.304 ] 00:15:36.304 }' 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.304 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.872 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.872 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.872 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.872 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.873 "name": "raid_bdev1", 00:15:36.873 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:36.873 "strip_size_kb": 64, 00:15:36.873 "state": "online", 00:15:36.873 "raid_level": "raid5f", 00:15:36.873 "superblock": true, 00:15:36.873 "num_base_bdevs": 4, 00:15:36.873 "num_base_bdevs_discovered": 3, 00:15:36.873 "num_base_bdevs_operational": 3, 00:15:36.873 "base_bdevs_list": [ 00:15:36.873 { 00:15:36.873 "name": null, 00:15:36.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.873 "is_configured": false, 00:15:36.873 "data_offset": 0, 00:15:36.873 "data_size": 63488 00:15:36.873 }, 00:15:36.873 { 
00:15:36.873 "name": "BaseBdev2", 00:15:36.873 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:36.873 "is_configured": true, 00:15:36.873 "data_offset": 2048, 00:15:36.873 "data_size": 63488 00:15:36.873 }, 00:15:36.873 { 00:15:36.873 "name": "BaseBdev3", 00:15:36.873 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:36.873 "is_configured": true, 00:15:36.873 "data_offset": 2048, 00:15:36.873 "data_size": 63488 00:15:36.873 }, 00:15:36.873 { 00:15:36.873 "name": "BaseBdev4", 00:15:36.873 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:36.873 "is_configured": true, 00:15:36.873 "data_offset": 2048, 00:15:36.873 "data_size": 63488 00:15:36.873 } 00:15:36.873 ] 00:15:36.873 }' 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.873 [2024-12-07 02:48:47.808546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.873 [2024-12-07 02:48:47.814311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.873 02:48:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:36.873 [2024-12-07 02:48:47.816725] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.810 "name": "raid_bdev1", 00:15:37.810 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:37.810 "strip_size_kb": 64, 00:15:37.810 "state": "online", 00:15:37.810 "raid_level": "raid5f", 00:15:37.810 "superblock": true, 00:15:37.810 "num_base_bdevs": 4, 00:15:37.810 "num_base_bdevs_discovered": 4, 00:15:37.810 "num_base_bdevs_operational": 4, 00:15:37.810 "process": { 00:15:37.810 "type": "rebuild", 00:15:37.810 "target": "spare", 00:15:37.810 "progress": { 00:15:37.810 "blocks": 19200, 00:15:37.810 "percent": 10 00:15:37.810 } 00:15:37.810 }, 00:15:37.810 "base_bdevs_list": [ 00:15:37.810 { 00:15:37.810 "name": "spare", 00:15:37.810 "uuid": 
"c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:37.810 "is_configured": true, 00:15:37.810 "data_offset": 2048, 00:15:37.810 "data_size": 63488 00:15:37.810 }, 00:15:37.810 { 00:15:37.810 "name": "BaseBdev2", 00:15:37.810 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:37.810 "is_configured": true, 00:15:37.810 "data_offset": 2048, 00:15:37.810 "data_size": 63488 00:15:37.810 }, 00:15:37.810 { 00:15:37.810 "name": "BaseBdev3", 00:15:37.810 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:37.810 "is_configured": true, 00:15:37.810 "data_offset": 2048, 00:15:37.810 "data_size": 63488 00:15:37.810 }, 00:15:37.810 { 00:15:37.810 "name": "BaseBdev4", 00:15:37.810 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:37.810 "is_configured": true, 00:15:37.810 "data_offset": 2048, 00:15:37.810 "data_size": 63488 00:15:37.810 } 00:15:37.810 ] 00:15:37.810 }' 00:15:37.810 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:38.069 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=541 00:15:38.069 
02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.069 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.069 "name": "raid_bdev1", 00:15:38.069 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:38.069 "strip_size_kb": 64, 00:15:38.069 "state": "online", 00:15:38.069 "raid_level": "raid5f", 00:15:38.069 "superblock": true, 00:15:38.069 "num_base_bdevs": 4, 00:15:38.069 "num_base_bdevs_discovered": 4, 00:15:38.069 "num_base_bdevs_operational": 4, 00:15:38.069 "process": { 00:15:38.069 "type": "rebuild", 00:15:38.069 "target": "spare", 00:15:38.069 "progress": { 00:15:38.069 "blocks": 21120, 00:15:38.069 "percent": 11 00:15:38.069 } 00:15:38.069 }, 00:15:38.069 "base_bdevs_list": [ 00:15:38.069 { 00:15:38.069 "name": "spare", 00:15:38.069 "uuid": 
"c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:38.069 "is_configured": true, 00:15:38.069 "data_offset": 2048, 00:15:38.069 "data_size": 63488 00:15:38.069 }, 00:15:38.069 { 00:15:38.069 "name": "BaseBdev2", 00:15:38.069 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:38.069 "is_configured": true, 00:15:38.069 "data_offset": 2048, 00:15:38.069 "data_size": 63488 00:15:38.069 }, 00:15:38.069 { 00:15:38.069 "name": "BaseBdev3", 00:15:38.069 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:38.069 "is_configured": true, 00:15:38.069 "data_offset": 2048, 00:15:38.069 "data_size": 63488 00:15:38.069 }, 00:15:38.069 { 00:15:38.069 "name": "BaseBdev4", 00:15:38.070 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:38.070 "is_configured": true, 00:15:38.070 "data_offset": 2048, 00:15:38.070 "data_size": 63488 00:15:38.070 } 00:15:38.070 ] 00:15:38.070 }' 00:15:38.070 02:48:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.070 02:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.070 02:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.070 02:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.070 02:48:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.448 "name": "raid_bdev1", 00:15:39.448 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:39.448 "strip_size_kb": 64, 00:15:39.448 "state": "online", 00:15:39.448 "raid_level": "raid5f", 00:15:39.448 "superblock": true, 00:15:39.448 "num_base_bdevs": 4, 00:15:39.448 "num_base_bdevs_discovered": 4, 00:15:39.448 "num_base_bdevs_operational": 4, 00:15:39.448 "process": { 00:15:39.448 "type": "rebuild", 00:15:39.448 "target": "spare", 00:15:39.448 "progress": { 00:15:39.448 "blocks": 42240, 00:15:39.448 "percent": 22 00:15:39.448 } 00:15:39.448 }, 00:15:39.448 "base_bdevs_list": [ 00:15:39.448 { 00:15:39.448 "name": "spare", 00:15:39.448 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:39.448 "is_configured": true, 00:15:39.448 "data_offset": 2048, 00:15:39.448 "data_size": 63488 00:15:39.448 }, 00:15:39.448 { 00:15:39.448 "name": "BaseBdev2", 00:15:39.448 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:39.448 "is_configured": true, 00:15:39.448 "data_offset": 2048, 00:15:39.448 "data_size": 63488 00:15:39.448 }, 00:15:39.448 { 00:15:39.448 "name": "BaseBdev3", 00:15:39.448 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:39.448 "is_configured": true, 00:15:39.448 
"data_offset": 2048, 00:15:39.448 "data_size": 63488 00:15:39.448 }, 00:15:39.448 { 00:15:39.448 "name": "BaseBdev4", 00:15:39.448 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:39.448 "is_configured": true, 00:15:39.448 "data_offset": 2048, 00:15:39.448 "data_size": 63488 00:15:39.448 } 00:15:39.448 ] 00:15:39.448 }' 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.448 02:48:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.388 "name": "raid_bdev1", 00:15:40.388 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:40.388 "strip_size_kb": 64, 00:15:40.388 "state": "online", 00:15:40.388 "raid_level": "raid5f", 00:15:40.388 "superblock": true, 00:15:40.388 "num_base_bdevs": 4, 00:15:40.388 "num_base_bdevs_discovered": 4, 00:15:40.388 "num_base_bdevs_operational": 4, 00:15:40.388 "process": { 00:15:40.388 "type": "rebuild", 00:15:40.388 "target": "spare", 00:15:40.388 "progress": { 00:15:40.388 "blocks": 65280, 00:15:40.388 "percent": 34 00:15:40.388 } 00:15:40.388 }, 00:15:40.388 "base_bdevs_list": [ 00:15:40.388 { 00:15:40.388 "name": "spare", 00:15:40.388 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:40.388 "is_configured": true, 00:15:40.388 "data_offset": 2048, 00:15:40.388 "data_size": 63488 00:15:40.388 }, 00:15:40.388 { 00:15:40.388 "name": "BaseBdev2", 00:15:40.388 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:40.388 "is_configured": true, 00:15:40.388 "data_offset": 2048, 00:15:40.388 "data_size": 63488 00:15:40.388 }, 00:15:40.388 { 00:15:40.388 "name": "BaseBdev3", 00:15:40.388 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:40.388 "is_configured": true, 00:15:40.388 "data_offset": 2048, 00:15:40.388 "data_size": 63488 00:15:40.388 }, 00:15:40.388 { 00:15:40.388 "name": "BaseBdev4", 00:15:40.388 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:40.388 "is_configured": true, 00:15:40.388 "data_offset": 2048, 00:15:40.388 "data_size": 63488 00:15:40.388 } 00:15:40.388 ] 00:15:40.388 }' 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:40.388 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.389 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.389 02:48:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.328 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.588 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.588 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.588 "name": "raid_bdev1", 00:15:41.588 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:41.588 "strip_size_kb": 64, 00:15:41.588 "state": "online", 00:15:41.588 "raid_level": "raid5f", 00:15:41.588 "superblock": true, 00:15:41.588 "num_base_bdevs": 4, 00:15:41.588 "num_base_bdevs_discovered": 4, 
00:15:41.588 "num_base_bdevs_operational": 4, 00:15:41.588 "process": { 00:15:41.588 "type": "rebuild", 00:15:41.588 "target": "spare", 00:15:41.588 "progress": { 00:15:41.588 "blocks": 86400, 00:15:41.588 "percent": 45 00:15:41.588 } 00:15:41.588 }, 00:15:41.588 "base_bdevs_list": [ 00:15:41.588 { 00:15:41.588 "name": "spare", 00:15:41.588 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:41.588 "is_configured": true, 00:15:41.588 "data_offset": 2048, 00:15:41.588 "data_size": 63488 00:15:41.588 }, 00:15:41.588 { 00:15:41.588 "name": "BaseBdev2", 00:15:41.588 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:41.588 "is_configured": true, 00:15:41.588 "data_offset": 2048, 00:15:41.588 "data_size": 63488 00:15:41.588 }, 00:15:41.588 { 00:15:41.588 "name": "BaseBdev3", 00:15:41.588 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:41.588 "is_configured": true, 00:15:41.588 "data_offset": 2048, 00:15:41.588 "data_size": 63488 00:15:41.588 }, 00:15:41.588 { 00:15:41.588 "name": "BaseBdev4", 00:15:41.588 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:41.588 "is_configured": true, 00:15:41.588 "data_offset": 2048, 00:15:41.588 "data_size": 63488 00:15:41.588 } 00:15:41.588 ] 00:15:41.588 }' 00:15:41.588 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.588 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.588 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.588 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.588 02:48:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.527 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.527 "name": "raid_bdev1", 00:15:42.527 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:42.527 "strip_size_kb": 64, 00:15:42.527 "state": "online", 00:15:42.527 "raid_level": "raid5f", 00:15:42.527 "superblock": true, 00:15:42.527 "num_base_bdevs": 4, 00:15:42.527 "num_base_bdevs_discovered": 4, 00:15:42.527 "num_base_bdevs_operational": 4, 00:15:42.527 "process": { 00:15:42.527 "type": "rebuild", 00:15:42.527 "target": "spare", 00:15:42.527 "progress": { 00:15:42.527 "blocks": 107520, 00:15:42.527 "percent": 56 00:15:42.527 } 00:15:42.527 }, 00:15:42.527 "base_bdevs_list": [ 00:15:42.527 { 00:15:42.528 "name": "spare", 00:15:42.528 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:42.528 "is_configured": true, 00:15:42.528 "data_offset": 2048, 00:15:42.528 "data_size": 63488 00:15:42.528 }, 00:15:42.528 { 00:15:42.528 "name": "BaseBdev2", 
00:15:42.528 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:42.528 "is_configured": true, 00:15:42.528 "data_offset": 2048, 00:15:42.528 "data_size": 63488 00:15:42.528 }, 00:15:42.528 { 00:15:42.528 "name": "BaseBdev3", 00:15:42.528 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:42.528 "is_configured": true, 00:15:42.528 "data_offset": 2048, 00:15:42.528 "data_size": 63488 00:15:42.528 }, 00:15:42.528 { 00:15:42.528 "name": "BaseBdev4", 00:15:42.528 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:42.528 "is_configured": true, 00:15:42.528 "data_offset": 2048, 00:15:42.528 "data_size": 63488 00:15:42.528 } 00:15:42.528 ] 00:15:42.528 }' 00:15:42.528 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.787 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.787 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.787 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.787 02:48:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.728 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.728 "name": "raid_bdev1", 00:15:43.728 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:43.728 "strip_size_kb": 64, 00:15:43.728 "state": "online", 00:15:43.728 "raid_level": "raid5f", 00:15:43.728 "superblock": true, 00:15:43.728 "num_base_bdevs": 4, 00:15:43.728 "num_base_bdevs_discovered": 4, 00:15:43.728 "num_base_bdevs_operational": 4, 00:15:43.728 "process": { 00:15:43.728 "type": "rebuild", 00:15:43.728 "target": "spare", 00:15:43.728 "progress": { 00:15:43.728 "blocks": 130560, 00:15:43.728 "percent": 68 00:15:43.728 } 00:15:43.728 }, 00:15:43.728 "base_bdevs_list": [ 00:15:43.728 { 00:15:43.728 "name": "spare", 00:15:43.728 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:43.728 "is_configured": true, 00:15:43.728 "data_offset": 2048, 00:15:43.728 "data_size": 63488 00:15:43.728 }, 00:15:43.728 { 00:15:43.728 "name": "BaseBdev2", 00:15:43.728 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:43.728 "is_configured": true, 00:15:43.728 "data_offset": 2048, 00:15:43.728 "data_size": 63488 00:15:43.728 }, 00:15:43.728 { 00:15:43.728 "name": "BaseBdev3", 00:15:43.729 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:43.729 "is_configured": true, 00:15:43.729 "data_offset": 2048, 00:15:43.729 "data_size": 63488 00:15:43.729 }, 00:15:43.729 { 00:15:43.729 "name": "BaseBdev4", 00:15:43.729 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:43.729 "is_configured": true, 
00:15:43.729 "data_offset": 2048, 00:15:43.729 "data_size": 63488 00:15:43.729 } 00:15:43.729 ] 00:15:43.729 }' 00:15:43.729 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.729 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.729 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.989 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.989 02:48:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.925 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:44.926 "name": "raid_bdev1", 00:15:44.926 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:44.926 "strip_size_kb": 64, 00:15:44.926 "state": "online", 00:15:44.926 "raid_level": "raid5f", 00:15:44.926 "superblock": true, 00:15:44.926 "num_base_bdevs": 4, 00:15:44.926 "num_base_bdevs_discovered": 4, 00:15:44.926 "num_base_bdevs_operational": 4, 00:15:44.926 "process": { 00:15:44.926 "type": "rebuild", 00:15:44.926 "target": "spare", 00:15:44.926 "progress": { 00:15:44.926 "blocks": 153600, 00:15:44.926 "percent": 80 00:15:44.926 } 00:15:44.926 }, 00:15:44.926 "base_bdevs_list": [ 00:15:44.926 { 00:15:44.926 "name": "spare", 00:15:44.926 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:44.926 "is_configured": true, 00:15:44.926 "data_offset": 2048, 00:15:44.926 "data_size": 63488 00:15:44.926 }, 00:15:44.926 { 00:15:44.926 "name": "BaseBdev2", 00:15:44.926 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:44.926 "is_configured": true, 00:15:44.926 "data_offset": 2048, 00:15:44.926 "data_size": 63488 00:15:44.926 }, 00:15:44.926 { 00:15:44.926 "name": "BaseBdev3", 00:15:44.926 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:44.926 "is_configured": true, 00:15:44.926 "data_offset": 2048, 00:15:44.926 "data_size": 63488 00:15:44.926 }, 00:15:44.926 { 00:15:44.926 "name": "BaseBdev4", 00:15:44.926 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:44.926 "is_configured": true, 00:15:44.926 "data_offset": 2048, 00:15:44.926 "data_size": 63488 00:15:44.926 } 00:15:44.926 ] 00:15:44.926 }' 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:15:44.926 02:48:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.311 02:48:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.311 02:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.311 "name": "raid_bdev1", 00:15:46.312 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:46.312 "strip_size_kb": 64, 00:15:46.312 "state": "online", 00:15:46.312 "raid_level": "raid5f", 00:15:46.312 "superblock": true, 00:15:46.312 "num_base_bdevs": 4, 00:15:46.312 "num_base_bdevs_discovered": 4, 00:15:46.312 "num_base_bdevs_operational": 4, 00:15:46.312 "process": { 00:15:46.312 "type": "rebuild", 00:15:46.312 "target": "spare", 00:15:46.312 "progress": { 00:15:46.312 "blocks": 174720, 00:15:46.312 "percent": 91 00:15:46.312 
} 00:15:46.312 }, 00:15:46.312 "base_bdevs_list": [ 00:15:46.312 { 00:15:46.312 "name": "spare", 00:15:46.312 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:46.312 "is_configured": true, 00:15:46.312 "data_offset": 2048, 00:15:46.312 "data_size": 63488 00:15:46.312 }, 00:15:46.312 { 00:15:46.312 "name": "BaseBdev2", 00:15:46.312 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:46.312 "is_configured": true, 00:15:46.312 "data_offset": 2048, 00:15:46.312 "data_size": 63488 00:15:46.312 }, 00:15:46.312 { 00:15:46.312 "name": "BaseBdev3", 00:15:46.312 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:46.312 "is_configured": true, 00:15:46.312 "data_offset": 2048, 00:15:46.312 "data_size": 63488 00:15:46.312 }, 00:15:46.312 { 00:15:46.312 "name": "BaseBdev4", 00:15:46.312 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:46.312 "is_configured": true, 00:15:46.312 "data_offset": 2048, 00:15:46.312 "data_size": 63488 00:15:46.312 } 00:15:46.312 ] 00:15:46.312 }' 00:15:46.312 02:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.312 02:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.312 02:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.312 02:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.312 02:48:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.909 [2024-12-07 02:48:57.860036] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:46.909 [2024-12-07 02:48:57.860186] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:46.909 [2024-12-07 02:48:57.860340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.181 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.181 "name": "raid_bdev1", 00:15:47.181 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:47.181 "strip_size_kb": 64, 00:15:47.181 "state": "online", 00:15:47.181 "raid_level": "raid5f", 00:15:47.181 "superblock": true, 00:15:47.181 "num_base_bdevs": 4, 00:15:47.181 "num_base_bdevs_discovered": 4, 00:15:47.181 "num_base_bdevs_operational": 4, 00:15:47.181 "base_bdevs_list": [ 00:15:47.181 { 00:15:47.181 "name": "spare", 00:15:47.181 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:47.181 "is_configured": true, 00:15:47.181 "data_offset": 2048, 00:15:47.181 "data_size": 63488 00:15:47.181 }, 00:15:47.181 { 00:15:47.181 "name": "BaseBdev2", 00:15:47.181 "uuid": 
"da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:47.181 "is_configured": true, 00:15:47.181 "data_offset": 2048, 00:15:47.181 "data_size": 63488 00:15:47.181 }, 00:15:47.181 { 00:15:47.181 "name": "BaseBdev3", 00:15:47.181 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:47.181 "is_configured": true, 00:15:47.181 "data_offset": 2048, 00:15:47.181 "data_size": 63488 00:15:47.181 }, 00:15:47.181 { 00:15:47.181 "name": "BaseBdev4", 00:15:47.181 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:47.181 "is_configured": true, 00:15:47.181 "data_offset": 2048, 00:15:47.182 "data_size": 63488 00:15:47.182 } 00:15:47.182 ] 00:15:47.182 }' 00:15:47.182 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.182 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:47.182 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.442 
02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.442 "name": "raid_bdev1", 00:15:47.442 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:47.442 "strip_size_kb": 64, 00:15:47.442 "state": "online", 00:15:47.442 "raid_level": "raid5f", 00:15:47.442 "superblock": true, 00:15:47.442 "num_base_bdevs": 4, 00:15:47.442 "num_base_bdevs_discovered": 4, 00:15:47.442 "num_base_bdevs_operational": 4, 00:15:47.442 "base_bdevs_list": [ 00:15:47.442 { 00:15:47.442 "name": "spare", 00:15:47.442 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:47.442 "is_configured": true, 00:15:47.442 "data_offset": 2048, 00:15:47.442 "data_size": 63488 00:15:47.442 }, 00:15:47.442 { 00:15:47.442 "name": "BaseBdev2", 00:15:47.442 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:47.442 "is_configured": true, 00:15:47.442 "data_offset": 2048, 00:15:47.442 "data_size": 63488 00:15:47.442 }, 00:15:47.442 { 00:15:47.442 "name": "BaseBdev3", 00:15:47.442 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:47.442 "is_configured": true, 00:15:47.442 "data_offset": 2048, 00:15:47.442 "data_size": 63488 00:15:47.442 }, 00:15:47.442 { 00:15:47.442 "name": "BaseBdev4", 00:15:47.442 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:47.442 "is_configured": true, 00:15:47.442 "data_offset": 2048, 00:15:47.442 "data_size": 63488 00:15:47.442 } 00:15:47.442 ] 00:15:47.442 }' 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:47.442 02:48:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.442 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:47.443 "name": "raid_bdev1", 00:15:47.443 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:47.443 "strip_size_kb": 64, 00:15:47.443 "state": "online", 00:15:47.443 "raid_level": "raid5f", 00:15:47.443 "superblock": true, 00:15:47.443 "num_base_bdevs": 4, 00:15:47.443 "num_base_bdevs_discovered": 4, 00:15:47.443 "num_base_bdevs_operational": 4, 00:15:47.443 "base_bdevs_list": [ 00:15:47.443 { 00:15:47.443 "name": "spare", 00:15:47.443 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:47.443 "is_configured": true, 00:15:47.443 "data_offset": 2048, 00:15:47.443 "data_size": 63488 00:15:47.443 }, 00:15:47.443 { 00:15:47.443 "name": "BaseBdev2", 00:15:47.443 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:47.443 "is_configured": true, 00:15:47.443 "data_offset": 2048, 00:15:47.443 "data_size": 63488 00:15:47.443 }, 00:15:47.443 { 00:15:47.443 "name": "BaseBdev3", 00:15:47.443 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:47.443 "is_configured": true, 00:15:47.443 "data_offset": 2048, 00:15:47.443 "data_size": 63488 00:15:47.443 }, 00:15:47.443 { 00:15:47.443 "name": "BaseBdev4", 00:15:47.443 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:47.443 "is_configured": true, 00:15:47.443 "data_offset": 2048, 00:15:47.443 "data_size": 63488 00:15:47.443 } 00:15:47.443 ] 00:15:47.443 }' 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.443 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.012 [2024-12-07 02:48:58.847866] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.012 [2024-12-07 
02:48:58.847947] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.012 [2024-12-07 02:48:58.848102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.012 [2024-12-07 02:48:58.848207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.012 [2024-12-07 02:48:58.848225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.012 02:48:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:48.012 02:48:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:48.272 /dev/nbd0 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.272 1+0 records in 00:15:48.272 1+0 
records out 00:15:48.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371506 s, 11.0 MB/s 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:48.272 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:48.531 /dev/nbd1 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:48.531 02:48:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.531 1+0 records in 00:15:48.531 1+0 records out 00:15:48.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522535 s, 7.8 MB/s 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.531 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.792 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.052 [2024-12-07 02:48:59.942638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:49.052 [2024-12-07 02:48:59.942701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.052 [2024-12-07 02:48:59.942738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:49.052 [2024-12-07 02:48:59.942754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.052 [2024-12-07 02:48:59.944993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.052 [2024-12-07 02:48:59.945048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:49.052 [2024-12-07 02:48:59.945143] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:49.052 [2024-12-07 02:48:59.945187] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.052 [2024-12-07 02:48:59.945311] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:49.052 [2024-12-07 02:48:59.945422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.052 [2024-12-07 02:48:59.945490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.052 spare 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.052 02:48:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.052 [2024-12-07 02:49:00.045406] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:15:49.052 [2024-12-07 02:49:00.045436] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:49.052 [2024-12-07 02:49:00.045729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049030 00:15:49.052 [2024-12-07 02:49:00.046213] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:15:49.052 [2024-12-07 02:49:00.046235] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:15:49.052 [2024-12-07 02:49:00.046419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.052 "name": "raid_bdev1", 00:15:49.052 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:49.052 "strip_size_kb": 64, 00:15:49.052 "state": "online", 00:15:49.052 "raid_level": "raid5f", 00:15:49.052 "superblock": true, 00:15:49.052 "num_base_bdevs": 4, 00:15:49.052 "num_base_bdevs_discovered": 4, 00:15:49.052 "num_base_bdevs_operational": 4, 00:15:49.052 "base_bdevs_list": [ 00:15:49.052 { 
00:15:49.052 "name": "spare", 00:15:49.052 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:49.052 "is_configured": true, 00:15:49.052 "data_offset": 2048, 00:15:49.052 "data_size": 63488 00:15:49.052 }, 00:15:49.052 { 00:15:49.052 "name": "BaseBdev2", 00:15:49.052 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:49.052 "is_configured": true, 00:15:49.052 "data_offset": 2048, 00:15:49.052 "data_size": 63488 00:15:49.052 }, 00:15:49.052 { 00:15:49.052 "name": "BaseBdev3", 00:15:49.052 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:49.052 "is_configured": true, 00:15:49.052 "data_offset": 2048, 00:15:49.052 "data_size": 63488 00:15:49.052 }, 00:15:49.052 { 00:15:49.052 "name": "BaseBdev4", 00:15:49.052 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:49.052 "is_configured": true, 00:15:49.052 "data_offset": 2048, 00:15:49.052 "data_size": 63488 00:15:49.052 } 00:15:49.052 ] 00:15:49.052 }' 00:15:49.052 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.053 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.621 "name": "raid_bdev1", 00:15:49.621 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:49.621 "strip_size_kb": 64, 00:15:49.621 "state": "online", 00:15:49.621 "raid_level": "raid5f", 00:15:49.621 "superblock": true, 00:15:49.621 "num_base_bdevs": 4, 00:15:49.621 "num_base_bdevs_discovered": 4, 00:15:49.621 "num_base_bdevs_operational": 4, 00:15:49.621 "base_bdevs_list": [ 00:15:49.621 { 00:15:49.621 "name": "spare", 00:15:49.621 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:49.621 "is_configured": true, 00:15:49.621 "data_offset": 2048, 00:15:49.621 "data_size": 63488 00:15:49.621 }, 00:15:49.621 { 00:15:49.621 "name": "BaseBdev2", 00:15:49.621 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:49.621 "is_configured": true, 00:15:49.621 "data_offset": 2048, 00:15:49.621 "data_size": 63488 00:15:49.621 }, 00:15:49.621 { 00:15:49.621 "name": "BaseBdev3", 00:15:49.621 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:49.621 "is_configured": true, 00:15:49.621 "data_offset": 2048, 00:15:49.621 "data_size": 63488 00:15:49.621 }, 00:15:49.621 { 00:15:49.621 "name": "BaseBdev4", 00:15:49.621 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:49.621 "is_configured": true, 00:15:49.621 "data_offset": 2048, 00:15:49.621 "data_size": 63488 00:15:49.621 } 00:15:49.621 ] 00:15:49.621 }' 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.621 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.881 [2024-12-07 02:49:00.705443] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.881 "name": "raid_bdev1", 00:15:49.881 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:49.881 "strip_size_kb": 64, 00:15:49.881 "state": "online", 00:15:49.881 "raid_level": "raid5f", 00:15:49.881 "superblock": true, 00:15:49.881 "num_base_bdevs": 4, 00:15:49.881 "num_base_bdevs_discovered": 3, 00:15:49.881 "num_base_bdevs_operational": 3, 00:15:49.881 "base_bdevs_list": [ 00:15:49.881 { 00:15:49.881 "name": null, 00:15:49.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.881 "is_configured": false, 00:15:49.881 "data_offset": 0, 00:15:49.881 "data_size": 63488 00:15:49.881 }, 00:15:49.881 { 00:15:49.881 "name": "BaseBdev2", 00:15:49.881 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:49.881 "is_configured": true, 00:15:49.881 "data_offset": 2048, 00:15:49.881 "data_size": 63488 00:15:49.881 }, 00:15:49.881 
{ 00:15:49.881 "name": "BaseBdev3", 00:15:49.881 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:49.881 "is_configured": true, 00:15:49.881 "data_offset": 2048, 00:15:49.881 "data_size": 63488 00:15:49.881 }, 00:15:49.881 { 00:15:49.881 "name": "BaseBdev4", 00:15:49.881 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:49.881 "is_configured": true, 00:15:49.881 "data_offset": 2048, 00:15:49.881 "data_size": 63488 00:15:49.881 } 00:15:49.881 ] 00:15:49.881 }' 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.881 02:49:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.140 02:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:50.140 02:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.140 02:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.140 [2024-12-07 02:49:01.168720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.140 [2024-12-07 02:49:01.168897] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.140 [2024-12-07 02:49:01.168914] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:50.140 [2024-12-07 02:49:01.168961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.140 [2024-12-07 02:49:01.172327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049100 00:15:50.140 [2024-12-07 02:49:01.174574] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:50.140 02:49:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.140 02:49:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.519 "name": "raid_bdev1", 00:15:51.519 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:51.519 "strip_size_kb": 64, 00:15:51.519 "state": "online", 00:15:51.519 
"raid_level": "raid5f", 00:15:51.519 "superblock": true, 00:15:51.519 "num_base_bdevs": 4, 00:15:51.519 "num_base_bdevs_discovered": 4, 00:15:51.519 "num_base_bdevs_operational": 4, 00:15:51.519 "process": { 00:15:51.519 "type": "rebuild", 00:15:51.519 "target": "spare", 00:15:51.519 "progress": { 00:15:51.519 "blocks": 19200, 00:15:51.519 "percent": 10 00:15:51.519 } 00:15:51.519 }, 00:15:51.519 "base_bdevs_list": [ 00:15:51.519 { 00:15:51.519 "name": "spare", 00:15:51.519 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:51.519 "is_configured": true, 00:15:51.519 "data_offset": 2048, 00:15:51.519 "data_size": 63488 00:15:51.519 }, 00:15:51.519 { 00:15:51.519 "name": "BaseBdev2", 00:15:51.519 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:51.519 "is_configured": true, 00:15:51.519 "data_offset": 2048, 00:15:51.519 "data_size": 63488 00:15:51.519 }, 00:15:51.519 { 00:15:51.519 "name": "BaseBdev3", 00:15:51.519 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:51.519 "is_configured": true, 00:15:51.519 "data_offset": 2048, 00:15:51.519 "data_size": 63488 00:15:51.519 }, 00:15:51.519 { 00:15:51.519 "name": "BaseBdev4", 00:15:51.519 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:51.519 "is_configured": true, 00:15:51.519 "data_offset": 2048, 00:15:51.519 "data_size": 63488 00:15:51.519 } 00:15:51.519 ] 00:15:51.519 }' 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.519 [2024-12-07 02:49:02.333572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.519 [2024-12-07 02:49:02.380754] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:51.519 [2024-12-07 02:49:02.380813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.519 [2024-12-07 02:49:02.380835] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:51.519 [2024-12-07 02:49:02.380843] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.519 "name": "raid_bdev1", 00:15:51.519 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:51.519 "strip_size_kb": 64, 00:15:51.519 "state": "online", 00:15:51.519 "raid_level": "raid5f", 00:15:51.519 "superblock": true, 00:15:51.519 "num_base_bdevs": 4, 00:15:51.519 "num_base_bdevs_discovered": 3, 00:15:51.519 "num_base_bdevs_operational": 3, 00:15:51.519 "base_bdevs_list": [ 00:15:51.519 { 00:15:51.519 "name": null, 00:15:51.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.519 "is_configured": false, 00:15:51.519 "data_offset": 0, 00:15:51.519 "data_size": 63488 00:15:51.519 }, 00:15:51.519 { 00:15:51.519 "name": "BaseBdev2", 00:15:51.519 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:51.519 "is_configured": true, 00:15:51.519 "data_offset": 2048, 00:15:51.519 "data_size": 63488 00:15:51.519 }, 00:15:51.519 { 00:15:51.519 "name": "BaseBdev3", 00:15:51.519 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:51.519 "is_configured": true, 00:15:51.519 "data_offset": 2048, 00:15:51.519 "data_size": 63488 00:15:51.519 }, 00:15:51.519 { 00:15:51.519 "name": "BaseBdev4", 00:15:51.519 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:51.519 "is_configured": true, 00:15:51.519 "data_offset": 2048, 00:15:51.519 "data_size": 63488 00:15:51.519 } 00:15:51.519 ] 00:15:51.519 
}' 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.519 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.780 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.780 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.780 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.780 [2024-12-07 02:49:02.800954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.780 [2024-12-07 02:49:02.801059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.780 [2024-12-07 02:49:02.801106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:51.780 [2024-12-07 02:49:02.801138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.780 [2024-12-07 02:49:02.801640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.780 [2024-12-07 02:49:02.801707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.780 [2024-12-07 02:49:02.801824] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:51.780 [2024-12-07 02:49:02.801868] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.780 [2024-12-07 02:49:02.801926] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:51.780 [2024-12-07 02:49:02.801992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.780 [2024-12-07 02:49:02.804669] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:15:51.780 [2024-12-07 02:49:02.806873] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.780 spare 00:15:51.780 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.780 02:49:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:53.161 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.162 "name": "raid_bdev1", 00:15:53.162 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:53.162 "strip_size_kb": 64, 00:15:53.162 "state": 
"online", 00:15:53.162 "raid_level": "raid5f", 00:15:53.162 "superblock": true, 00:15:53.162 "num_base_bdevs": 4, 00:15:53.162 "num_base_bdevs_discovered": 4, 00:15:53.162 "num_base_bdevs_operational": 4, 00:15:53.162 "process": { 00:15:53.162 "type": "rebuild", 00:15:53.162 "target": "spare", 00:15:53.162 "progress": { 00:15:53.162 "blocks": 19200, 00:15:53.162 "percent": 10 00:15:53.162 } 00:15:53.162 }, 00:15:53.162 "base_bdevs_list": [ 00:15:53.162 { 00:15:53.162 "name": "spare", 00:15:53.162 "uuid": "c29342f4-8f0f-5b92-9468-9cbd8707ab2b", 00:15:53.162 "is_configured": true, 00:15:53.162 "data_offset": 2048, 00:15:53.162 "data_size": 63488 00:15:53.162 }, 00:15:53.162 { 00:15:53.162 "name": "BaseBdev2", 00:15:53.162 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:53.162 "is_configured": true, 00:15:53.162 "data_offset": 2048, 00:15:53.162 "data_size": 63488 00:15:53.162 }, 00:15:53.162 { 00:15:53.162 "name": "BaseBdev3", 00:15:53.162 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:53.162 "is_configured": true, 00:15:53.162 "data_offset": 2048, 00:15:53.162 "data_size": 63488 00:15:53.162 }, 00:15:53.162 { 00:15:53.162 "name": "BaseBdev4", 00:15:53.162 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:53.162 "is_configured": true, 00:15:53.162 "data_offset": 2048, 00:15:53.162 "data_size": 63488 00:15:53.162 } 00:15:53.162 ] 00:15:53.162 }' 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:53.162 02:49:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.162 02:49:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 [2024-12-07 02:49:03.969633] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.162 [2024-12-07 02:49:04.012083] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:53.162 [2024-12-07 02:49:04.012200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.162 [2024-12-07 02:49:04.012241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.162 [2024-12-07 02:49:04.012268] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.162 02:49:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.162 "name": "raid_bdev1", 00:15:53.162 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:53.162 "strip_size_kb": 64, 00:15:53.162 "state": "online", 00:15:53.162 "raid_level": "raid5f", 00:15:53.162 "superblock": true, 00:15:53.162 "num_base_bdevs": 4, 00:15:53.162 "num_base_bdevs_discovered": 3, 00:15:53.162 "num_base_bdevs_operational": 3, 00:15:53.162 "base_bdevs_list": [ 00:15:53.162 { 00:15:53.162 "name": null, 00:15:53.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.162 "is_configured": false, 00:15:53.162 "data_offset": 0, 00:15:53.162 "data_size": 63488 00:15:53.162 }, 00:15:53.162 { 00:15:53.162 "name": "BaseBdev2", 00:15:53.162 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:53.162 "is_configured": true, 00:15:53.162 "data_offset": 2048, 00:15:53.162 "data_size": 63488 00:15:53.162 }, 00:15:53.162 { 00:15:53.162 "name": "BaseBdev3", 00:15:53.162 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:53.162 "is_configured": true, 00:15:53.162 "data_offset": 2048, 00:15:53.162 "data_size": 63488 00:15:53.162 }, 00:15:53.162 { 00:15:53.162 "name": "BaseBdev4", 00:15:53.162 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:53.162 "is_configured": true, 00:15:53.162 "data_offset": 2048, 00:15:53.162 
"data_size": 63488 00:15:53.162 } 00:15:53.162 ] 00:15:53.162 }' 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.162 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.421 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.682 "name": "raid_bdev1", 00:15:53.682 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:53.682 "strip_size_kb": 64, 00:15:53.682 "state": "online", 00:15:53.682 "raid_level": "raid5f", 00:15:53.682 "superblock": true, 00:15:53.682 "num_base_bdevs": 4, 00:15:53.682 "num_base_bdevs_discovered": 3, 00:15:53.682 "num_base_bdevs_operational": 3, 00:15:53.682 "base_bdevs_list": [ 00:15:53.682 { 00:15:53.682 "name": null, 00:15:53.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.682 
"is_configured": false, 00:15:53.682 "data_offset": 0, 00:15:53.682 "data_size": 63488 00:15:53.682 }, 00:15:53.682 { 00:15:53.682 "name": "BaseBdev2", 00:15:53.682 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:53.682 "is_configured": true, 00:15:53.682 "data_offset": 2048, 00:15:53.682 "data_size": 63488 00:15:53.682 }, 00:15:53.682 { 00:15:53.682 "name": "BaseBdev3", 00:15:53.682 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:53.682 "is_configured": true, 00:15:53.682 "data_offset": 2048, 00:15:53.682 "data_size": 63488 00:15:53.682 }, 00:15:53.682 { 00:15:53.682 "name": "BaseBdev4", 00:15:53.682 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:53.682 "is_configured": true, 00:15:53.682 "data_offset": 2048, 00:15:53.682 "data_size": 63488 00:15:53.682 } 00:15:53.682 ] 00:15:53.682 }' 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.682 02:49:04 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.682 [2024-12-07 02:49:04.659959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:53.682 [2024-12-07 02:49:04.660020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.682 [2024-12-07 02:49:04.660042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:53.682 [2024-12-07 02:49:04.660054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.682 [2024-12-07 02:49:04.660476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.682 [2024-12-07 02:49:04.660499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:53.682 [2024-12-07 02:49:04.660568] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:53.682 [2024-12-07 02:49:04.660632] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.682 [2024-12-07 02:49:04.660642] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:53.682 [2024-12-07 02:49:04.660655] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:53.682 BaseBdev1 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.682 02:49:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.619 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.877 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.877 "name": "raid_bdev1", 00:15:54.877 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:54.877 "strip_size_kb": 64, 00:15:54.878 "state": "online", 00:15:54.878 "raid_level": "raid5f", 00:15:54.878 "superblock": true, 00:15:54.878 "num_base_bdevs": 4, 00:15:54.878 "num_base_bdevs_discovered": 3, 00:15:54.878 "num_base_bdevs_operational": 3, 00:15:54.878 "base_bdevs_list": [ 00:15:54.878 { 00:15:54.878 "name": null, 00:15:54.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.878 "is_configured": false, 00:15:54.878 
"data_offset": 0, 00:15:54.878 "data_size": 63488 00:15:54.878 }, 00:15:54.878 { 00:15:54.878 "name": "BaseBdev2", 00:15:54.878 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:54.878 "is_configured": true, 00:15:54.878 "data_offset": 2048, 00:15:54.878 "data_size": 63488 00:15:54.878 }, 00:15:54.878 { 00:15:54.878 "name": "BaseBdev3", 00:15:54.878 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:54.878 "is_configured": true, 00:15:54.878 "data_offset": 2048, 00:15:54.878 "data_size": 63488 00:15:54.878 }, 00:15:54.878 { 00:15:54.878 "name": "BaseBdev4", 00:15:54.878 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:54.878 "is_configured": true, 00:15:54.878 "data_offset": 2048, 00:15:54.878 "data_size": 63488 00:15:54.878 } 00:15:54.878 ] 00:15:54.878 }' 00:15:54.878 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.878 02:49:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.137 "name": "raid_bdev1", 00:15:55.137 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:55.137 "strip_size_kb": 64, 00:15:55.137 "state": "online", 00:15:55.137 "raid_level": "raid5f", 00:15:55.137 "superblock": true, 00:15:55.137 "num_base_bdevs": 4, 00:15:55.137 "num_base_bdevs_discovered": 3, 00:15:55.137 "num_base_bdevs_operational": 3, 00:15:55.137 "base_bdevs_list": [ 00:15:55.137 { 00:15:55.137 "name": null, 00:15:55.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.137 "is_configured": false, 00:15:55.137 "data_offset": 0, 00:15:55.137 "data_size": 63488 00:15:55.137 }, 00:15:55.137 { 00:15:55.137 "name": "BaseBdev2", 00:15:55.137 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:55.137 "is_configured": true, 00:15:55.137 "data_offset": 2048, 00:15:55.137 "data_size": 63488 00:15:55.137 }, 00:15:55.137 { 00:15:55.137 "name": "BaseBdev3", 00:15:55.137 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:55.137 "is_configured": true, 00:15:55.137 "data_offset": 2048, 00:15:55.137 "data_size": 63488 00:15:55.137 }, 00:15:55.137 { 00:15:55.137 "name": "BaseBdev4", 00:15:55.137 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:55.137 "is_configured": true, 00:15:55.137 "data_offset": 2048, 00:15:55.137 "data_size": 63488 00:15:55.137 } 00:15:55.137 ] 00:15:55.137 }' 00:15:55.137 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.396 
02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.396 [2024-12-07 02:49:06.321334] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.396 [2024-12-07 02:49:06.321528] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:55.396 [2024-12-07 02:49:06.321604] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:55.396 request: 00:15:55.396 { 00:15:55.396 "base_bdev": "BaseBdev1", 00:15:55.396 "raid_bdev": "raid_bdev1", 00:15:55.396 "method": "bdev_raid_add_base_bdev", 00:15:55.396 "req_id": 1 00:15:55.396 } 00:15:55.396 Got JSON-RPC error response 00:15:55.396 response: 00:15:55.396 { 00:15:55.396 "code": -22, 00:15:55.396 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:15:55.396 } 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:55.396 02:49:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:56.334 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.335 "name": "raid_bdev1", 00:15:56.335 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:56.335 "strip_size_kb": 64, 00:15:56.335 "state": "online", 00:15:56.335 "raid_level": "raid5f", 00:15:56.335 "superblock": true, 00:15:56.335 "num_base_bdevs": 4, 00:15:56.335 "num_base_bdevs_discovered": 3, 00:15:56.335 "num_base_bdevs_operational": 3, 00:15:56.335 "base_bdevs_list": [ 00:15:56.335 { 00:15:56.335 "name": null, 00:15:56.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.335 "is_configured": false, 00:15:56.335 "data_offset": 0, 00:15:56.335 "data_size": 63488 00:15:56.335 }, 00:15:56.335 { 00:15:56.335 "name": "BaseBdev2", 00:15:56.335 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:56.335 "is_configured": true, 00:15:56.335 "data_offset": 2048, 00:15:56.335 "data_size": 63488 00:15:56.335 }, 00:15:56.335 { 00:15:56.335 "name": "BaseBdev3", 00:15:56.335 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:56.335 "is_configured": true, 00:15:56.335 "data_offset": 2048, 00:15:56.335 "data_size": 63488 00:15:56.335 }, 00:15:56.335 { 00:15:56.335 "name": "BaseBdev4", 00:15:56.335 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:56.335 "is_configured": true, 00:15:56.335 "data_offset": 2048, 00:15:56.335 "data_size": 63488 00:15:56.335 } 00:15:56.335 ] 00:15:56.335 }' 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.335 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:56.902 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.902 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.902 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.902 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.902 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.902 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.903 "name": "raid_bdev1", 00:15:56.903 "uuid": "c8767a02-1b01-4be3-a01d-befe324e8db5", 00:15:56.903 "strip_size_kb": 64, 00:15:56.903 "state": "online", 00:15:56.903 "raid_level": "raid5f", 00:15:56.903 "superblock": true, 00:15:56.903 "num_base_bdevs": 4, 00:15:56.903 "num_base_bdevs_discovered": 3, 00:15:56.903 "num_base_bdevs_operational": 3, 00:15:56.903 "base_bdevs_list": [ 00:15:56.903 { 00:15:56.903 "name": null, 00:15:56.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.903 "is_configured": false, 00:15:56.903 "data_offset": 0, 00:15:56.903 "data_size": 63488 00:15:56.903 }, 00:15:56.903 { 00:15:56.903 "name": "BaseBdev2", 00:15:56.903 "uuid": "da4d4790-397d-5baa-83be-7f43ebdc8d89", 00:15:56.903 "is_configured": true, 
00:15:56.903 "data_offset": 2048, 00:15:56.903 "data_size": 63488 00:15:56.903 }, 00:15:56.903 { 00:15:56.903 "name": "BaseBdev3", 00:15:56.903 "uuid": "2754067f-741c-5e81-bf9b-a8f6cbc7a663", 00:15:56.903 "is_configured": true, 00:15:56.903 "data_offset": 2048, 00:15:56.903 "data_size": 63488 00:15:56.903 }, 00:15:56.903 { 00:15:56.903 "name": "BaseBdev4", 00:15:56.903 "uuid": "c30ebc02-87d0-56a1-b398-23a83250ae8c", 00:15:56.903 "is_configured": true, 00:15:56.903 "data_offset": 2048, 00:15:56.903 "data_size": 63488 00:15:56.903 } 00:15:56.903 ] 00:15:56.903 }' 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95769 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95769 ']' 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95769 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.903 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95769 00:15:57.163 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.163 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.163 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 95769' 00:15:57.163 killing process with pid 95769 00:15:57.163 Received shutdown signal, test time was about 60.000000 seconds 00:15:57.163 00:15:57.163 Latency(us) 00:15:57.163 [2024-12-07T02:49:08.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.163 [2024-12-07T02:49:08.241Z] =================================================================================================================== 00:15:57.163 [2024-12-07T02:49:08.241Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:57.163 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95769 00:15:57.163 [2024-12-07 02:49:07.991467] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.163 [2024-12-07 02:49:07.991576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.163 02:49:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95769 00:15:57.163 [2024-12-07 02:49:07.991663] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.163 [2024-12-07 02:49:07.991674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:15:57.163 [2024-12-07 02:49:08.042856] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.423 02:49:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:57.423 00:15:57.423 real 0m25.383s 00:15:57.423 user 0m32.173s 00:15:57.423 sys 0m3.230s 00:15:57.423 ************************************ 00:15:57.423 END TEST raid5f_rebuild_test_sb 00:15:57.423 ************************************ 00:15:57.423 02:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.423 02:49:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.423 02:49:08 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:15:57.423 02:49:08 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:15:57.423 02:49:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:57.423 02:49:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.423 02:49:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.423 ************************************ 00:15:57.423 START TEST raid_state_function_test_sb_4k 00:15:57.423 ************************************ 00:15:57.423 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:57.423 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:57.423 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:57.423 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:57.423 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:57.423 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:57.424 02:49:08 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=96567 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 96567' 00:15:57.424 Process raid pid: 96567 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 96567 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96567 ']' 00:15:57.424 02:49:08 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.424 02:49:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:57.424 [2024-12-07 02:49:08.459460] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:15:57.424 [2024-12-07 02:49:08.459618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.683 [2024-12-07 02:49:08.627952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.683 [2024-12-07 02:49:08.676283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.683 [2024-12-07 02:49:08.719718] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.684 [2024-12-07 02:49:08.719763] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.254 [2024-12-07 02:49:09.273773] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.254 [2024-12-07 02:49:09.273836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.254 [2024-12-07 02:49:09.273858] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.254 [2024-12-07 02:49:09.273871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.254 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.255 
02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.255 "name": "Existed_Raid", 00:15:58.255 "uuid": "3cad5e8f-ca02-4064-9587-0d0fe6c28770", 00:15:58.255 "strip_size_kb": 0, 00:15:58.255 "state": "configuring", 00:15:58.255 "raid_level": "raid1", 00:15:58.255 "superblock": true, 00:15:58.255 "num_base_bdevs": 2, 00:15:58.255 "num_base_bdevs_discovered": 0, 00:15:58.255 "num_base_bdevs_operational": 2, 00:15:58.255 "base_bdevs_list": [ 00:15:58.255 { 00:15:58.255 "name": "BaseBdev1", 00:15:58.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.255 "is_configured": false, 00:15:58.255 "data_offset": 0, 00:15:58.255 "data_size": 0 00:15:58.255 }, 00:15:58.255 { 00:15:58.255 "name": "BaseBdev2", 00:15:58.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.255 "is_configured": false, 00:15:58.255 "data_offset": 0, 00:15:58.255 "data_size": 0 00:15:58.255 } 00:15:58.255 ] 00:15:58.255 }' 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.255 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.825 [2024-12-07 02:49:09.764850] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.825 [2024-12-07 02:49:09.764971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.825 [2024-12-07 02:49:09.776875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.825 [2024-12-07 02:49:09.776958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.825 [2024-12-07 02:49:09.776987] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.825 [2024-12-07 02:49:09.777013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.825 02:49:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.825 [2024-12-07 02:49:09.797823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.825 BaseBdev1 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.825 [ 00:15:58.825 { 00:15:58.825 "name": "BaseBdev1", 00:15:58.825 "aliases": [ 00:15:58.825 
"66588ad3-1af1-4794-aad8-0200155c5ccd" 00:15:58.825 ], 00:15:58.825 "product_name": "Malloc disk", 00:15:58.825 "block_size": 4096, 00:15:58.825 "num_blocks": 8192, 00:15:58.825 "uuid": "66588ad3-1af1-4794-aad8-0200155c5ccd", 00:15:58.825 "assigned_rate_limits": { 00:15:58.825 "rw_ios_per_sec": 0, 00:15:58.825 "rw_mbytes_per_sec": 0, 00:15:58.825 "r_mbytes_per_sec": 0, 00:15:58.825 "w_mbytes_per_sec": 0 00:15:58.825 }, 00:15:58.825 "claimed": true, 00:15:58.825 "claim_type": "exclusive_write", 00:15:58.825 "zoned": false, 00:15:58.825 "supported_io_types": { 00:15:58.825 "read": true, 00:15:58.825 "write": true, 00:15:58.825 "unmap": true, 00:15:58.825 "flush": true, 00:15:58.825 "reset": true, 00:15:58.825 "nvme_admin": false, 00:15:58.825 "nvme_io": false, 00:15:58.825 "nvme_io_md": false, 00:15:58.825 "write_zeroes": true, 00:15:58.825 "zcopy": true, 00:15:58.825 "get_zone_info": false, 00:15:58.825 "zone_management": false, 00:15:58.825 "zone_append": false, 00:15:58.825 "compare": false, 00:15:58.825 "compare_and_write": false, 00:15:58.825 "abort": true, 00:15:58.825 "seek_hole": false, 00:15:58.825 "seek_data": false, 00:15:58.825 "copy": true, 00:15:58.825 "nvme_iov_md": false 00:15:58.825 }, 00:15:58.825 "memory_domains": [ 00:15:58.825 { 00:15:58.825 "dma_device_id": "system", 00:15:58.825 "dma_device_type": 1 00:15:58.825 }, 00:15:58.825 { 00:15:58.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.825 "dma_device_type": 2 00:15:58.825 } 00:15:58.825 ], 00:15:58.825 "driver_specific": {} 00:15:58.825 } 00:15:58.825 ] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.825 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.825 "name": "Existed_Raid", 00:15:58.825 "uuid": "7a52bb75-0311-4d68-8c68-ff536f17524e", 00:15:58.825 "strip_size_kb": 0, 00:15:58.825 "state": "configuring", 00:15:58.825 "raid_level": "raid1", 00:15:58.825 "superblock": true, 00:15:58.825 "num_base_bdevs": 2, 00:15:58.825 
"num_base_bdevs_discovered": 1, 00:15:58.825 "num_base_bdevs_operational": 2, 00:15:58.825 "base_bdevs_list": [ 00:15:58.825 { 00:15:58.825 "name": "BaseBdev1", 00:15:58.825 "uuid": "66588ad3-1af1-4794-aad8-0200155c5ccd", 00:15:58.825 "is_configured": true, 00:15:58.825 "data_offset": 256, 00:15:58.825 "data_size": 7936 00:15:58.825 }, 00:15:58.825 { 00:15:58.826 "name": "BaseBdev2", 00:15:58.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.826 "is_configured": false, 00:15:58.826 "data_offset": 0, 00:15:58.826 "data_size": 0 00:15:58.826 } 00:15:58.826 ] 00:15:58.826 }' 00:15:58.826 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.826 02:49:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.395 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:59.395 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.395 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.395 [2024-12-07 02:49:10.261044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.395 [2024-12-07 02:49:10.261142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:15:59.395 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.395 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:59.395 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.395 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.395 [2024-12-07 02:49:10.273059] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.396 [2024-12-07 02:49:10.274795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.396 [2024-12-07 02:49:10.274843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.396 "name": "Existed_Raid", 00:15:59.396 "uuid": "3ead9d93-7582-4a5d-ae2e-4f167f5349e9", 00:15:59.396 "strip_size_kb": 0, 00:15:59.396 "state": "configuring", 00:15:59.396 "raid_level": "raid1", 00:15:59.396 "superblock": true, 00:15:59.396 "num_base_bdevs": 2, 00:15:59.396 "num_base_bdevs_discovered": 1, 00:15:59.396 "num_base_bdevs_operational": 2, 00:15:59.396 "base_bdevs_list": [ 00:15:59.396 { 00:15:59.396 "name": "BaseBdev1", 00:15:59.396 "uuid": "66588ad3-1af1-4794-aad8-0200155c5ccd", 00:15:59.396 "is_configured": true, 00:15:59.396 "data_offset": 256, 00:15:59.396 "data_size": 7936 00:15:59.396 }, 00:15:59.396 { 00:15:59.396 "name": "BaseBdev2", 00:15:59.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.396 "is_configured": false, 00:15:59.396 "data_offset": 0, 00:15:59.396 "data_size": 0 00:15:59.396 } 00:15:59.396 ] 00:15:59.396 }' 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.396 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.965 02:49:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.965 [2024-12-07 02:49:10.758001] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.965 [2024-12-07 02:49:10.758766] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:15:59.965 [2024-12-07 02:49:10.758940] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:59.965 BaseBdev2 00:15:59.965 [2024-12-07 02:49:10.759988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:59.965 [2024-12-07 02:49:10.760671] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:15:59.965 [2024-12-07 02:49:10.760782] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:59.965 [2024-12-07 02:49:10.761212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.965 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:59.965 02:49:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.966 [ 00:15:59.966 { 00:15:59.966 "name": "BaseBdev2", 00:15:59.966 "aliases": [ 00:15:59.966 "c008d15b-2505-463f-adbd-1708527b241d" 00:15:59.966 ], 00:15:59.966 "product_name": "Malloc disk", 00:15:59.966 "block_size": 4096, 00:15:59.966 "num_blocks": 8192, 00:15:59.966 "uuid": "c008d15b-2505-463f-adbd-1708527b241d", 00:15:59.966 "assigned_rate_limits": { 00:15:59.966 "rw_ios_per_sec": 0, 00:15:59.966 "rw_mbytes_per_sec": 0, 00:15:59.966 "r_mbytes_per_sec": 0, 00:15:59.966 "w_mbytes_per_sec": 0 00:15:59.966 }, 00:15:59.966 "claimed": true, 00:15:59.966 "claim_type": "exclusive_write", 00:15:59.966 "zoned": false, 00:15:59.966 "supported_io_types": { 00:15:59.966 "read": true, 00:15:59.966 "write": true, 00:15:59.966 "unmap": true, 00:15:59.966 "flush": true, 00:15:59.966 "reset": true, 00:15:59.966 "nvme_admin": false, 00:15:59.966 "nvme_io": false, 00:15:59.966 "nvme_io_md": false, 00:15:59.966 "write_zeroes": true, 00:15:59.966 "zcopy": true, 00:15:59.966 "get_zone_info": false, 00:15:59.966 "zone_management": false, 00:15:59.966 "zone_append": false, 00:15:59.966 "compare": false, 00:15:59.966 "compare_and_write": false, 00:15:59.966 "abort": true, 00:15:59.966 "seek_hole": false, 00:15:59.966 "seek_data": false, 00:15:59.966 "copy": true, 00:15:59.966 "nvme_iov_md": false 
00:15:59.966 }, 00:15:59.966 "memory_domains": [ 00:15:59.966 { 00:15:59.966 "dma_device_id": "system", 00:15:59.966 "dma_device_type": 1 00:15:59.966 }, 00:15:59.966 { 00:15:59.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.966 "dma_device_type": 2 00:15:59.966 } 00:15:59.966 ], 00:15:59.966 "driver_specific": {} 00:15:59.966 } 00:15:59.966 ] 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.966 "name": "Existed_Raid", 00:15:59.966 "uuid": "3ead9d93-7582-4a5d-ae2e-4f167f5349e9", 00:15:59.966 "strip_size_kb": 0, 00:15:59.966 "state": "online", 00:15:59.966 "raid_level": "raid1", 00:15:59.966 "superblock": true, 00:15:59.966 "num_base_bdevs": 2, 00:15:59.966 "num_base_bdevs_discovered": 2, 00:15:59.966 "num_base_bdevs_operational": 2, 00:15:59.966 "base_bdevs_list": [ 00:15:59.966 { 00:15:59.966 "name": "BaseBdev1", 00:15:59.966 "uuid": "66588ad3-1af1-4794-aad8-0200155c5ccd", 00:15:59.966 "is_configured": true, 00:15:59.966 "data_offset": 256, 00:15:59.966 "data_size": 7936 00:15:59.966 }, 00:15:59.966 { 00:15:59.966 "name": "BaseBdev2", 00:15:59.966 "uuid": "c008d15b-2505-463f-adbd-1708527b241d", 00:15:59.966 "is_configured": true, 00:15:59.966 "data_offset": 256, 00:15:59.966 "data_size": 7936 00:15:59.966 } 00:15:59.966 ] 00:15:59.966 }' 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.966 02:49:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:00.226 02:49:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.226 [2024-12-07 02:49:11.249414] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.226 "name": "Existed_Raid", 00:16:00.226 "aliases": [ 00:16:00.226 "3ead9d93-7582-4a5d-ae2e-4f167f5349e9" 00:16:00.226 ], 00:16:00.226 "product_name": "Raid Volume", 00:16:00.226 "block_size": 4096, 00:16:00.226 "num_blocks": 7936, 00:16:00.226 "uuid": "3ead9d93-7582-4a5d-ae2e-4f167f5349e9", 00:16:00.226 "assigned_rate_limits": { 00:16:00.226 "rw_ios_per_sec": 0, 00:16:00.226 "rw_mbytes_per_sec": 0, 00:16:00.226 "r_mbytes_per_sec": 0, 00:16:00.226 "w_mbytes_per_sec": 0 00:16:00.226 }, 00:16:00.226 "claimed": false, 00:16:00.226 "zoned": false, 00:16:00.226 "supported_io_types": { 00:16:00.226 "read": true, 
00:16:00.226 "write": true, 00:16:00.226 "unmap": false, 00:16:00.226 "flush": false, 00:16:00.226 "reset": true, 00:16:00.226 "nvme_admin": false, 00:16:00.226 "nvme_io": false, 00:16:00.226 "nvme_io_md": false, 00:16:00.226 "write_zeroes": true, 00:16:00.226 "zcopy": false, 00:16:00.226 "get_zone_info": false, 00:16:00.226 "zone_management": false, 00:16:00.226 "zone_append": false, 00:16:00.226 "compare": false, 00:16:00.226 "compare_and_write": false, 00:16:00.226 "abort": false, 00:16:00.226 "seek_hole": false, 00:16:00.226 "seek_data": false, 00:16:00.226 "copy": false, 00:16:00.226 "nvme_iov_md": false 00:16:00.226 }, 00:16:00.226 "memory_domains": [ 00:16:00.226 { 00:16:00.226 "dma_device_id": "system", 00:16:00.226 "dma_device_type": 1 00:16:00.226 }, 00:16:00.226 { 00:16:00.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.226 "dma_device_type": 2 00:16:00.226 }, 00:16:00.226 { 00:16:00.226 "dma_device_id": "system", 00:16:00.226 "dma_device_type": 1 00:16:00.226 }, 00:16:00.226 { 00:16:00.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.226 "dma_device_type": 2 00:16:00.226 } 00:16:00.226 ], 00:16:00.226 "driver_specific": { 00:16:00.226 "raid": { 00:16:00.226 "uuid": "3ead9d93-7582-4a5d-ae2e-4f167f5349e9", 00:16:00.226 "strip_size_kb": 0, 00:16:00.226 "state": "online", 00:16:00.226 "raid_level": "raid1", 00:16:00.226 "superblock": true, 00:16:00.226 "num_base_bdevs": 2, 00:16:00.226 "num_base_bdevs_discovered": 2, 00:16:00.226 "num_base_bdevs_operational": 2, 00:16:00.226 "base_bdevs_list": [ 00:16:00.226 { 00:16:00.226 "name": "BaseBdev1", 00:16:00.226 "uuid": "66588ad3-1af1-4794-aad8-0200155c5ccd", 00:16:00.226 "is_configured": true, 00:16:00.226 "data_offset": 256, 00:16:00.226 "data_size": 7936 00:16:00.226 }, 00:16:00.226 { 00:16:00.226 "name": "BaseBdev2", 00:16:00.226 "uuid": "c008d15b-2505-463f-adbd-1708527b241d", 00:16:00.226 "is_configured": true, 00:16:00.226 "data_offset": 256, 00:16:00.226 "data_size": 7936 00:16:00.226 } 
00:16:00.226 ] 00:16:00.226 } 00:16:00.226 } 00:16:00.226 }' 00:16:00.226 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:00.487 BaseBdev2' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.487 [2024-12-07 02:49:11.464838] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:00.487 02:49:11 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.487 "name": "Existed_Raid", 00:16:00.487 "uuid": "3ead9d93-7582-4a5d-ae2e-4f167f5349e9", 00:16:00.487 "strip_size_kb": 0, 00:16:00.487 "state": "online", 00:16:00.487 "raid_level": "raid1", 00:16:00.487 "superblock": true, 00:16:00.487 
"num_base_bdevs": 2, 00:16:00.487 "num_base_bdevs_discovered": 1, 00:16:00.487 "num_base_bdevs_operational": 1, 00:16:00.487 "base_bdevs_list": [ 00:16:00.487 { 00:16:00.487 "name": null, 00:16:00.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.487 "is_configured": false, 00:16:00.487 "data_offset": 0, 00:16:00.487 "data_size": 7936 00:16:00.487 }, 00:16:00.487 { 00:16:00.487 "name": "BaseBdev2", 00:16:00.487 "uuid": "c008d15b-2505-463f-adbd-1708527b241d", 00:16:00.487 "is_configured": true, 00:16:00.487 "data_offset": 256, 00:16:00.487 "data_size": 7936 00:16:00.487 } 00:16:00.487 ] 00:16:00.487 }' 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.487 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.058 [2024-12-07 02:49:11.963550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.058 [2024-12-07 02:49:11.963664] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.058 [2024-12-07 02:49:11.975376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.058 [2024-12-07 02:49:11.975431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.058 [2024-12-07 02:49:11.975444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.058 02:49:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:01.058 02:49:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 96567 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96567 ']' 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96567 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96567 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.058 killing process with pid 96567 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96567' 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96567 00:16:01.058 [2024-12-07 02:49:12.073981] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.058 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96567 00:16:01.058 [2024-12-07 02:49:12.074975] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.318 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:01.318 00:16:01.318 real 0m3.961s 00:16:01.318 user 0m6.159s 00:16:01.318 sys 0m0.887s 00:16:01.318 02:49:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.318 02:49:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.318 ************************************ 00:16:01.318 END TEST raid_state_function_test_sb_4k 00:16:01.318 ************************************ 00:16:01.318 02:49:12 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:01.318 02:49:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:01.318 02:49:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.318 02:49:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.318 ************************************ 00:16:01.318 START TEST raid_superblock_test_4k 00:16:01.318 ************************************ 00:16:01.318 02:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:01.579 
02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96802 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96802 00:16:01.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96802 ']' 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.579 02:49:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:01.579 [2024-12-07 02:49:12.490456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:01.579 [2024-12-07 02:49:12.490676] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96802 ] 00:16:01.579 [2024-12-07 02:49:12.642231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.839 [2024-12-07 02:49:12.686510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.839 [2024-12-07 02:49:12.730241] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.839 [2024-12-07 02:49:12.730369] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.408 malloc1 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.408 [2024-12-07 02:49:13.309352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.408 [2024-12-07 02:49:13.309479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.408 [2024-12-07 02:49:13.309517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:02.408 [2024-12-07 02:49:13.309555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.408 [2024-12-07 02:49:13.311699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.408 [2024-12-07 02:49:13.311783] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.408 pt1 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.408 malloc2 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.408 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.408 [2024-12-07 02:49:13.357431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.408 [2024-12-07 02:49:13.357670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.408 [2024-12-07 02:49:13.357767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:02.408 [2024-12-07 02:49:13.357876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.409 [2024-12-07 02:49:13.362916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.409 [2024-12-07 
02:49:13.363089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.409 pt2 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.409 [2024-12-07 02:49:13.371476] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.409 [2024-12-07 02:49:13.374345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.409 [2024-12-07 02:49:13.374517] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:02.409 [2024-12-07 02:49:13.374544] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:02.409 [2024-12-07 02:49:13.374863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:02.409 [2024-12-07 02:49:13.375039] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:02.409 [2024-12-07 02:49:13.375053] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:02.409 [2024-12-07 02:49:13.375203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.409 "name": "raid_bdev1", 00:16:02.409 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:02.409 "strip_size_kb": 0, 00:16:02.409 "state": "online", 00:16:02.409 "raid_level": "raid1", 00:16:02.409 "superblock": true, 00:16:02.409 "num_base_bdevs": 2, 00:16:02.409 
"num_base_bdevs_discovered": 2, 00:16:02.409 "num_base_bdevs_operational": 2, 00:16:02.409 "base_bdevs_list": [ 00:16:02.409 { 00:16:02.409 "name": "pt1", 00:16:02.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.409 "is_configured": true, 00:16:02.409 "data_offset": 256, 00:16:02.409 "data_size": 7936 00:16:02.409 }, 00:16:02.409 { 00:16:02.409 "name": "pt2", 00:16:02.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.409 "is_configured": true, 00:16:02.409 "data_offset": 256, 00:16:02.409 "data_size": 7936 00:16:02.409 } 00:16:02.409 ] 00:16:02.409 }' 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.409 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 [2024-12-07 02:49:13.798872] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:02.979 "name": "raid_bdev1", 00:16:02.979 "aliases": [ 00:16:02.979 "412018b3-3b00-4f6b-b888-70ab5e0388ba" 00:16:02.979 ], 00:16:02.979 "product_name": "Raid Volume", 00:16:02.979 "block_size": 4096, 00:16:02.979 "num_blocks": 7936, 00:16:02.979 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:02.979 "assigned_rate_limits": { 00:16:02.979 "rw_ios_per_sec": 0, 00:16:02.979 "rw_mbytes_per_sec": 0, 00:16:02.979 "r_mbytes_per_sec": 0, 00:16:02.979 "w_mbytes_per_sec": 0 00:16:02.979 }, 00:16:02.979 "claimed": false, 00:16:02.979 "zoned": false, 00:16:02.979 "supported_io_types": { 00:16:02.979 "read": true, 00:16:02.979 "write": true, 00:16:02.979 "unmap": false, 00:16:02.979 "flush": false, 00:16:02.979 "reset": true, 00:16:02.979 "nvme_admin": false, 00:16:02.979 "nvme_io": false, 00:16:02.979 "nvme_io_md": false, 00:16:02.979 "write_zeroes": true, 00:16:02.979 "zcopy": false, 00:16:02.979 "get_zone_info": false, 00:16:02.979 "zone_management": false, 00:16:02.979 "zone_append": false, 00:16:02.979 "compare": false, 00:16:02.979 "compare_and_write": false, 00:16:02.979 "abort": false, 00:16:02.979 "seek_hole": false, 00:16:02.979 "seek_data": false, 00:16:02.979 "copy": false, 00:16:02.979 "nvme_iov_md": false 00:16:02.979 }, 00:16:02.979 "memory_domains": [ 00:16:02.979 { 00:16:02.979 "dma_device_id": "system", 00:16:02.979 "dma_device_type": 1 00:16:02.979 }, 00:16:02.979 { 00:16:02.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.979 "dma_device_type": 2 00:16:02.979 }, 00:16:02.979 { 00:16:02.979 "dma_device_id": "system", 00:16:02.979 "dma_device_type": 1 00:16:02.979 }, 00:16:02.979 { 00:16:02.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.979 "dma_device_type": 2 00:16:02.979 } 00:16:02.979 ], 
00:16:02.979 "driver_specific": { 00:16:02.979 "raid": { 00:16:02.979 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:02.979 "strip_size_kb": 0, 00:16:02.979 "state": "online", 00:16:02.979 "raid_level": "raid1", 00:16:02.979 "superblock": true, 00:16:02.979 "num_base_bdevs": 2, 00:16:02.979 "num_base_bdevs_discovered": 2, 00:16:02.979 "num_base_bdevs_operational": 2, 00:16:02.979 "base_bdevs_list": [ 00:16:02.979 { 00:16:02.979 "name": "pt1", 00:16:02.979 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.979 "is_configured": true, 00:16:02.979 "data_offset": 256, 00:16:02.979 "data_size": 7936 00:16:02.979 }, 00:16:02.979 { 00:16:02.979 "name": "pt2", 00:16:02.979 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.979 "is_configured": true, 00:16:02.979 "data_offset": 256, 00:16:02.979 "data_size": 7936 00:16:02.979 } 00:16:02.979 ] 00:16:02.979 } 00:16:02.979 } 00:16:02.979 }' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:02.979 pt2' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.979 02:49:13 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 02:49:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.979 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.979 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:02.979 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:02.979 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:02.979 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.979 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.979 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:02.979 [2024-12-07 02:49:14.046373] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=412018b3-3b00-4f6b-b888-70ab5e0388ba 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 412018b3-3b00-4f6b-b888-70ab5e0388ba ']' 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 [2024-12-07 02:49:14.090087] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.240 [2024-12-07 02:49:14.090161] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.240 [2024-12-07 02:49:14.090258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.240 [2024-12-07 02:49:14.090348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.240 [2024-12-07 02:49:14.090403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:03.240 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.241 [2024-12-07 02:49:14.225877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:03.241 [2024-12-07 02:49:14.227714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:03.241 [2024-12-07 02:49:14.227806] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:03.241 [2024-12-07 02:49:14.227908] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:03.241 [2024-12-07 02:49:14.227995] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.241 [2024-12-07 02:49:14.228041] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:03.241 request: 00:16:03.241 { 00:16:03.241 "name": "raid_bdev1", 00:16:03.241 "raid_level": "raid1", 00:16:03.241 "base_bdevs": [ 00:16:03.241 "malloc1", 00:16:03.241 "malloc2" 00:16:03.241 ], 00:16:03.241 "superblock": false, 00:16:03.241 "method": "bdev_raid_create", 00:16:03.241 "req_id": 1 00:16:03.241 } 00:16:03.241 Got JSON-RPC error response 00:16:03.241 response: 00:16:03.241 { 00:16:03.241 "code": -17, 00:16:03.241 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:03.241 } 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.241 [2024-12-07 02:49:14.293755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:03.241 [2024-12-07 02:49:14.293805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.241 [2024-12-07 02:49:14.293825] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:03.241 [2024-12-07 02:49:14.293836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.241 [2024-12-07 02:49:14.296021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.241 [2024-12-07 02:49:14.296057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:03.241 [2024-12-07 02:49:14.296126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:03.241 [2024-12-07 02:49:14.296172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.241 pt1 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.241 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.501 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.501 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.501 "name": "raid_bdev1", 00:16:03.501 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:03.501 "strip_size_kb": 0, 00:16:03.501 "state": "configuring", 00:16:03.501 "raid_level": "raid1", 00:16:03.501 "superblock": true, 00:16:03.501 "num_base_bdevs": 2, 00:16:03.501 "num_base_bdevs_discovered": 1, 00:16:03.501 "num_base_bdevs_operational": 2, 00:16:03.501 "base_bdevs_list": [ 00:16:03.501 { 00:16:03.501 "name": "pt1", 00:16:03.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.501 "is_configured": true, 00:16:03.501 "data_offset": 256, 00:16:03.501 "data_size": 7936 00:16:03.501 }, 00:16:03.501 { 00:16:03.501 "name": null, 00:16:03.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.501 "is_configured": false, 00:16:03.501 "data_offset": 256, 00:16:03.501 "data_size": 7936 00:16:03.501 } 
00:16:03.501 ] 00:16:03.501 }' 00:16:03.501 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.501 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.761 [2024-12-07 02:49:14.697167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.761 [2024-12-07 02:49:14.697219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.761 [2024-12-07 02:49:14.697238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:03.761 [2024-12-07 02:49:14.697248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.761 [2024-12-07 02:49:14.697573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.761 [2024-12-07 02:49:14.697606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.761 [2024-12-07 02:49:14.697662] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.761 [2024-12-07 02:49:14.697681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.761 [2024-12-07 02:49:14.697760] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000006980 00:16:03.761 [2024-12-07 02:49:14.697769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:03.761 [2024-12-07 02:49:14.697988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:03.761 [2024-12-07 02:49:14.698101] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:03.761 [2024-12-07 02:49:14.698116] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:03.761 [2024-12-07 02:49:14.698208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.761 pt2 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.761 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.761 "name": "raid_bdev1", 00:16:03.761 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:03.761 "strip_size_kb": 0, 00:16:03.761 "state": "online", 00:16:03.761 "raid_level": "raid1", 00:16:03.761 "superblock": true, 00:16:03.761 "num_base_bdevs": 2, 00:16:03.761 "num_base_bdevs_discovered": 2, 00:16:03.761 "num_base_bdevs_operational": 2, 00:16:03.761 "base_bdevs_list": [ 00:16:03.761 { 00:16:03.761 "name": "pt1", 00:16:03.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.761 "is_configured": true, 00:16:03.761 "data_offset": 256, 00:16:03.761 "data_size": 7936 00:16:03.761 }, 00:16:03.761 { 00:16:03.761 "name": "pt2", 00:16:03.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.761 "is_configured": true, 00:16:03.761 "data_offset": 256, 00:16:03.761 "data_size": 7936 00:16:03.761 } 00:16:03.761 ] 00:16:03.761 }' 00:16:03.762 02:49:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.762 02:49:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.332 [2024-12-07 02:49:15.136670] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.332 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.332 "name": "raid_bdev1", 00:16:04.332 "aliases": [ 00:16:04.332 "412018b3-3b00-4f6b-b888-70ab5e0388ba" 00:16:04.332 ], 00:16:04.332 "product_name": "Raid Volume", 00:16:04.332 "block_size": 4096, 00:16:04.332 "num_blocks": 7936, 00:16:04.332 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:04.332 "assigned_rate_limits": { 00:16:04.332 "rw_ios_per_sec": 0, 00:16:04.332 "rw_mbytes_per_sec": 0, 00:16:04.332 "r_mbytes_per_sec": 0, 00:16:04.332 "w_mbytes_per_sec": 0 00:16:04.332 }, 00:16:04.333 "claimed": false, 00:16:04.333 "zoned": false, 00:16:04.333 "supported_io_types": { 00:16:04.333 "read": true, 00:16:04.333 "write": true, 00:16:04.333 "unmap": false, 
00:16:04.333 "flush": false, 00:16:04.333 "reset": true, 00:16:04.333 "nvme_admin": false, 00:16:04.333 "nvme_io": false, 00:16:04.333 "nvme_io_md": false, 00:16:04.333 "write_zeroes": true, 00:16:04.333 "zcopy": false, 00:16:04.333 "get_zone_info": false, 00:16:04.333 "zone_management": false, 00:16:04.333 "zone_append": false, 00:16:04.333 "compare": false, 00:16:04.333 "compare_and_write": false, 00:16:04.333 "abort": false, 00:16:04.333 "seek_hole": false, 00:16:04.333 "seek_data": false, 00:16:04.333 "copy": false, 00:16:04.333 "nvme_iov_md": false 00:16:04.333 }, 00:16:04.333 "memory_domains": [ 00:16:04.333 { 00:16:04.333 "dma_device_id": "system", 00:16:04.333 "dma_device_type": 1 00:16:04.333 }, 00:16:04.333 { 00:16:04.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.333 "dma_device_type": 2 00:16:04.333 }, 00:16:04.333 { 00:16:04.333 "dma_device_id": "system", 00:16:04.333 "dma_device_type": 1 00:16:04.333 }, 00:16:04.333 { 00:16:04.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.333 "dma_device_type": 2 00:16:04.333 } 00:16:04.333 ], 00:16:04.333 "driver_specific": { 00:16:04.333 "raid": { 00:16:04.333 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:04.333 "strip_size_kb": 0, 00:16:04.333 "state": "online", 00:16:04.333 "raid_level": "raid1", 00:16:04.333 "superblock": true, 00:16:04.333 "num_base_bdevs": 2, 00:16:04.333 "num_base_bdevs_discovered": 2, 00:16:04.333 "num_base_bdevs_operational": 2, 00:16:04.333 "base_bdevs_list": [ 00:16:04.333 { 00:16:04.333 "name": "pt1", 00:16:04.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.333 "is_configured": true, 00:16:04.333 "data_offset": 256, 00:16:04.333 "data_size": 7936 00:16:04.333 }, 00:16:04.333 { 00:16:04.333 "name": "pt2", 00:16:04.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.333 "is_configured": true, 00:16:04.333 "data_offset": 256, 00:16:04.333 "data_size": 7936 00:16:04.333 } 00:16:04.333 ] 00:16:04.333 } 00:16:04.333 } 00:16:04.333 }' 00:16:04.333 
02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:04.333 pt2' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.333 
02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.333 [2024-12-07 02:49:15.348325] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 412018b3-3b00-4f6b-b888-70ab5e0388ba '!=' 412018b3-3b00-4f6b-b888-70ab5e0388ba ']' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.333 [2024-12-07 02:49:15.376067] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:04.333 
02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.333 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.591 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.591 "name": "raid_bdev1", 00:16:04.591 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 
00:16:04.591 "strip_size_kb": 0, 00:16:04.591 "state": "online", 00:16:04.591 "raid_level": "raid1", 00:16:04.591 "superblock": true, 00:16:04.591 "num_base_bdevs": 2, 00:16:04.591 "num_base_bdevs_discovered": 1, 00:16:04.591 "num_base_bdevs_operational": 1, 00:16:04.591 "base_bdevs_list": [ 00:16:04.591 { 00:16:04.591 "name": null, 00:16:04.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.591 "is_configured": false, 00:16:04.591 "data_offset": 0, 00:16:04.591 "data_size": 7936 00:16:04.591 }, 00:16:04.591 { 00:16:04.591 "name": "pt2", 00:16:04.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.591 "is_configured": true, 00:16:04.591 "data_offset": 256, 00:16:04.591 "data_size": 7936 00:16:04.591 } 00:16:04.591 ] 00:16:04.591 }' 00:16:04.591 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.591 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.852 [2024-12-07 02:49:15.839227] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.852 [2024-12-07 02:49:15.839259] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.852 [2024-12-07 02:49:15.839312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.852 [2024-12-07 02:49:15.839353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.852 [2024-12-07 02:49:15.839361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:04.852 02:49:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:04.852 02:49:15 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:04.852 [2024-12-07 02:49:15.915106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:04.852 [2024-12-07 02:49:15.915150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.852 [2024-12-07 02:49:15.915167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:04.852 [2024-12-07 02:49:15.915177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.852 [2024-12-07 02:49:15.917114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.852 [2024-12-07 02:49:15.917151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:04.852 [2024-12-07 02:49:15.917215] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:04.852 [2024-12-07 02:49:15.917242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:04.852 [2024-12-07 02:49:15.917303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:04.852 [2024-12-07 02:49:15.917312] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:04.852 [2024-12-07 02:49:15.917505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:04.852 [2024-12-07 02:49:15.917643] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:04.852 [2024-12-07 02:49:15.917658] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 
00:16:04.852 [2024-12-07 02:49:15.917750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.852 pt2 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.852 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.112 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.112 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.112 02:49:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.112 "name": "raid_bdev1", 00:16:05.112 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:05.112 "strip_size_kb": 0, 00:16:05.112 "state": "online", 00:16:05.112 "raid_level": "raid1", 00:16:05.112 "superblock": true, 00:16:05.112 "num_base_bdevs": 2, 00:16:05.112 "num_base_bdevs_discovered": 1, 00:16:05.112 "num_base_bdevs_operational": 1, 00:16:05.112 "base_bdevs_list": [ 00:16:05.112 { 00:16:05.112 "name": null, 00:16:05.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.112 "is_configured": false, 00:16:05.112 "data_offset": 256, 00:16:05.112 "data_size": 7936 00:16:05.112 }, 00:16:05.112 { 00:16:05.112 "name": "pt2", 00:16:05.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.112 "is_configured": true, 00:16:05.112 "data_offset": 256, 00:16:05.112 "data_size": 7936 00:16:05.112 } 00:16:05.112 ] 00:16:05.112 }' 00:16:05.112 02:49:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.112 02:49:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.372 [2024-12-07 02:49:16.382340] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.372 [2024-12-07 02:49:16.382369] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.372 [2024-12-07 02:49:16.382416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.372 [2024-12-07 02:49:16.382450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.372 [2024-12-07 02:49:16.382461] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.372 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.372 [2024-12-07 02:49:16.446194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:05.372 [2024-12-07 02:49:16.446242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.373 [2024-12-07 02:49:16.446263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:05.373 [2024-12-07 02:49:16.446279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.373 [2024-12-07 02:49:16.448277] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.373 [2024-12-07 02:49:16.448315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:05.373 [2024-12-07 02:49:16.448370] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:05.373 [2024-12-07 02:49:16.448407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:05.373 [2024-12-07 02:49:16.448514] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:05.373 [2024-12-07 02:49:16.448530] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.373 [2024-12-07 02:49:16.448553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:05.373 [2024-12-07 02:49:16.448619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.373 [2024-12-07 02:49:16.448687] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:05.373 [2024-12-07 02:49:16.448707] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:05.373 [2024-12-07 02:49:16.448916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:05.373 [2024-12-07 02:49:16.449024] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:05.373 [2024-12-07 02:49:16.449034] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:05.373 [2024-12-07 02:49:16.449137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.632 pt1 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.632 "name": "raid_bdev1", 00:16:05.632 "uuid": "412018b3-3b00-4f6b-b888-70ab5e0388ba", 00:16:05.632 "strip_size_kb": 0, 00:16:05.632 "state": "online", 00:16:05.632 "raid_level": "raid1", 
00:16:05.632 "superblock": true, 00:16:05.632 "num_base_bdevs": 2, 00:16:05.632 "num_base_bdevs_discovered": 1, 00:16:05.632 "num_base_bdevs_operational": 1, 00:16:05.632 "base_bdevs_list": [ 00:16:05.632 { 00:16:05.632 "name": null, 00:16:05.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.632 "is_configured": false, 00:16:05.632 "data_offset": 256, 00:16:05.632 "data_size": 7936 00:16:05.632 }, 00:16:05.632 { 00:16:05.632 "name": "pt2", 00:16:05.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.632 "is_configured": true, 00:16:05.632 "data_offset": 256, 00:16:05.632 "data_size": 7936 00:16:05.632 } 00:16:05.632 ] 00:16:05.632 }' 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.632 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:05.892 
[2024-12-07 02:49:16.941597] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 412018b3-3b00-4f6b-b888-70ab5e0388ba '!=' 412018b3-3b00-4f6b-b888-70ab5e0388ba ']' 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96802 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96802 ']' 00:16:05.892 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96802 00:16:06.151 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:16:06.151 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:06.151 02:49:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96802 00:16:06.151 02:49:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:06.151 02:49:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:06.151 killing process with pid 96802 00:16:06.151 02:49:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96802' 00:16:06.151 02:49:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96802 00:16:06.151 [2024-12-07 02:49:17.009805] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.151 [2024-12-07 02:49:17.009867] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.151 [2024-12-07 02:49:17.009905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.151 [2024-12-07 02:49:17.009915] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:06.151 02:49:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96802 00:16:06.151 [2024-12-07 02:49:17.032851] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.411 02:49:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:06.411 00:16:06.411 real 0m4.888s 00:16:06.411 user 0m7.884s 00:16:06.411 sys 0m1.113s 00:16:06.411 02:49:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:06.411 02:49:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.411 ************************************ 00:16:06.411 END TEST raid_superblock_test_4k 00:16:06.411 ************************************ 00:16:06.411 02:49:17 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:06.411 02:49:17 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:06.411 02:49:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:06.411 02:49:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:06.411 02:49:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.411 ************************************ 00:16:06.411 START TEST raid_rebuild_test_sb_4k 00:16:06.411 ************************************ 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:06.411 02:49:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=97114 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 97114 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 97114 ']' 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.411 02:49:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:06.411 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:06.411 Zero copy mechanism will not be used. 00:16:06.411 [2024-12-07 02:49:17.472534] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:06.411 [2024-12-07 02:49:17.472689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97114 ] 00:16:06.671 [2024-12-07 02:49:17.635959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.672 [2024-12-07 02:49:17.682373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.672 [2024-12-07 02:49:17.726185] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.672 [2024-12-07 02:49:17.726229] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.242 BaseBdev1_malloc 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:07.242 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 [2024-12-07 02:49:18.325101] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:07.502 [2024-12-07 02:49:18.325170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.502 [2024-12-07 02:49:18.325204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:07.502 [2024-12-07 02:49:18.325230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.502 [2024-12-07 02:49:18.327257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.502 [2024-12-07 02:49:18.327300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:07.502 BaseBdev1 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 BaseBdev2_malloc 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 [2024-12-07 02:49:18.369654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:07.502 [2024-12-07 02:49:18.369759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:07.502 [2024-12-07 02:49:18.369813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:07.502 [2024-12-07 02:49:18.369841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.502 [2024-12-07 02:49:18.374710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.502 [2024-12-07 02:49:18.374789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:07.502 BaseBdev2 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 spare_malloc 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 spare_delay 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 
[2024-12-07 02:49:18.413333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.502 [2024-12-07 02:49:18.413387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.502 [2024-12-07 02:49:18.413410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:07.502 [2024-12-07 02:49:18.413420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.502 [2024-12-07 02:49:18.415406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.502 [2024-12-07 02:49:18.415445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.502 spare 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.502 [2024-12-07 02:49:18.425349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.502 [2024-12-07 02:49:18.427047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:07.502 [2024-12-07 02:49:18.427208] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:07.502 [2024-12-07 02:49:18.427231] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:07.502 [2024-12-07 02:49:18.427478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:07.502 [2024-12-07 02:49:18.427633] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:07.502 [2024-12-07 
02:49:18.427668] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:07.502 [2024-12-07 02:49:18.427791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.502 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.503 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.503 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:07.503 02:49:18 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.503 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.503 "name": "raid_bdev1", 00:16:07.503 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:07.503 "strip_size_kb": 0, 00:16:07.503 "state": "online", 00:16:07.503 "raid_level": "raid1", 00:16:07.503 "superblock": true, 00:16:07.503 "num_base_bdevs": 2, 00:16:07.503 "num_base_bdevs_discovered": 2, 00:16:07.503 "num_base_bdevs_operational": 2, 00:16:07.503 "base_bdevs_list": [ 00:16:07.503 { 00:16:07.503 "name": "BaseBdev1", 00:16:07.503 "uuid": "a96bae01-33ed-5c4f-b72c-3108b3d96cd6", 00:16:07.503 "is_configured": true, 00:16:07.503 "data_offset": 256, 00:16:07.503 "data_size": 7936 00:16:07.503 }, 00:16:07.503 { 00:16:07.503 "name": "BaseBdev2", 00:16:07.503 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:07.503 "is_configured": true, 00:16:07.503 "data_offset": 256, 00:16:07.503 "data_size": 7936 00:16:07.503 } 00:16:07.503 ] 00:16:07.503 }' 00:16:07.503 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.503 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:08.073 [2024-12-07 02:49:18.856881] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:08.073 02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:08.073 
02:49:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:08.073 [2024-12-07 02:49:19.124165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:08.073 /dev/nbd0 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:08.333 1+0 records in 00:16:08.333 1+0 records out 00:16:08.333 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447273 s, 9.2 MB/s 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:08.333 02:49:19 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:08.333 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:08.905 7936+0 records in 00:16:08.905 7936+0 records out 00:16:08.905 32505856 bytes (33 MB, 31 MiB) copied, 0.642928 s, 50.6 MB/s 00:16:08.905 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:08.905 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:08.905 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:08.905 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.905 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:08.905 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.905 02:49:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:09.164 [2024-12-07 02:49:20.058649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.165 [2024-12-07 02:49:20.090736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.165 02:49:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.165 "name": "raid_bdev1", 00:16:09.165 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:09.165 "strip_size_kb": 0, 00:16:09.165 "state": "online", 00:16:09.165 "raid_level": "raid1", 00:16:09.165 "superblock": true, 00:16:09.165 "num_base_bdevs": 2, 00:16:09.165 "num_base_bdevs_discovered": 1, 00:16:09.165 "num_base_bdevs_operational": 1, 00:16:09.165 "base_bdevs_list": [ 00:16:09.165 { 00:16:09.165 "name": null, 00:16:09.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.165 "is_configured": false, 00:16:09.165 "data_offset": 0, 00:16:09.165 "data_size": 7936 00:16:09.165 }, 00:16:09.165 { 00:16:09.165 "name": "BaseBdev2", 00:16:09.165 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:09.165 "is_configured": true, 00:16:09.165 "data_offset": 256, 00:16:09.165 
"data_size": 7936 00:16:09.165 } 00:16:09.165 ] 00:16:09.165 }' 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.165 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.734 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:09.734 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.734 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:09.734 [2024-12-07 02:49:20.561938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.734 [2024-12-07 02:49:20.566367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:09.734 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.734 02:49:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:09.734 [2024-12-07 02:49:20.568286] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.687 "name": "raid_bdev1", 00:16:10.687 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:10.687 "strip_size_kb": 0, 00:16:10.687 "state": "online", 00:16:10.687 "raid_level": "raid1", 00:16:10.687 "superblock": true, 00:16:10.687 "num_base_bdevs": 2, 00:16:10.687 "num_base_bdevs_discovered": 2, 00:16:10.687 "num_base_bdevs_operational": 2, 00:16:10.687 "process": { 00:16:10.687 "type": "rebuild", 00:16:10.687 "target": "spare", 00:16:10.687 "progress": { 00:16:10.687 "blocks": 2560, 00:16:10.687 "percent": 32 00:16:10.687 } 00:16:10.687 }, 00:16:10.687 "base_bdevs_list": [ 00:16:10.687 { 00:16:10.687 "name": "spare", 00:16:10.687 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:10.687 "is_configured": true, 00:16:10.687 "data_offset": 256, 00:16:10.687 "data_size": 7936 00:16:10.687 }, 00:16:10.687 { 00:16:10.687 "name": "BaseBdev2", 00:16:10.687 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:10.687 "is_configured": true, 00:16:10.687 "data_offset": 256, 00:16:10.687 "data_size": 7936 00:16:10.687 } 00:16:10.687 ] 00:16:10.687 }' 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.687 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.687 [2024-12-07 02:49:21.733022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.975 [2024-12-07 02:49:21.772923] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.975 [2024-12-07 02:49:21.772986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.975 [2024-12-07 02:49:21.773007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.975 [2024-12-07 02:49:21.773016] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.975 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.975 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.975 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.975 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.975 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.975 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.976 "name": "raid_bdev1", 00:16:10.976 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:10.976 "strip_size_kb": 0, 00:16:10.976 "state": "online", 00:16:10.976 "raid_level": "raid1", 00:16:10.976 "superblock": true, 00:16:10.976 "num_base_bdevs": 2, 00:16:10.976 "num_base_bdevs_discovered": 1, 00:16:10.976 "num_base_bdevs_operational": 1, 00:16:10.976 "base_bdevs_list": [ 00:16:10.976 { 00:16:10.976 "name": null, 00:16:10.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.976 "is_configured": false, 00:16:10.976 "data_offset": 0, 00:16:10.976 "data_size": 7936 00:16:10.976 }, 00:16:10.976 { 00:16:10.976 "name": "BaseBdev2", 00:16:10.976 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:10.976 "is_configured": true, 00:16:10.976 "data_offset": 256, 00:16:10.976 "data_size": 7936 00:16:10.976 } 00:16:10.976 ] 00:16:10.976 }' 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.976 02:49:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.257 02:49:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.257 "name": "raid_bdev1", 00:16:11.257 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:11.257 "strip_size_kb": 0, 00:16:11.257 "state": "online", 00:16:11.257 "raid_level": "raid1", 00:16:11.257 "superblock": true, 00:16:11.257 "num_base_bdevs": 2, 00:16:11.257 "num_base_bdevs_discovered": 1, 00:16:11.257 "num_base_bdevs_operational": 1, 00:16:11.257 "base_bdevs_list": [ 00:16:11.257 { 00:16:11.257 "name": null, 00:16:11.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.257 "is_configured": false, 00:16:11.257 "data_offset": 0, 00:16:11.257 "data_size": 7936 00:16:11.257 }, 00:16:11.257 { 00:16:11.257 "name": "BaseBdev2", 00:16:11.257 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:11.257 "is_configured": true, 00:16:11.257 "data_offset": 
256, 00:16:11.257 "data_size": 7936 00:16:11.257 } 00:16:11.257 ] 00:16:11.257 }' 00:16:11.257 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:11.530 [2024-12-07 02:49:22.388253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.530 [2024-12-07 02:49:22.392130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:11.530 [2024-12-07 02:49:22.394013] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.530 02:49:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.471 "name": "raid_bdev1", 00:16:12.471 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:12.471 "strip_size_kb": 0, 00:16:12.471 "state": "online", 00:16:12.471 "raid_level": "raid1", 00:16:12.471 "superblock": true, 00:16:12.471 "num_base_bdevs": 2, 00:16:12.471 "num_base_bdevs_discovered": 2, 00:16:12.471 "num_base_bdevs_operational": 2, 00:16:12.471 "process": { 00:16:12.471 "type": "rebuild", 00:16:12.471 "target": "spare", 00:16:12.471 "progress": { 00:16:12.471 "blocks": 2560, 00:16:12.471 "percent": 32 00:16:12.471 } 00:16:12.471 }, 00:16:12.471 "base_bdevs_list": [ 00:16:12.471 { 00:16:12.471 "name": "spare", 00:16:12.471 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:12.471 "is_configured": true, 00:16:12.471 "data_offset": 256, 00:16:12.471 "data_size": 7936 00:16:12.471 }, 00:16:12.471 { 00:16:12.471 "name": "BaseBdev2", 00:16:12.471 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:12.471 "is_configured": true, 00:16:12.471 "data_offset": 256, 00:16:12.471 "data_size": 7936 00:16:12.471 } 00:16:12.471 ] 00:16:12.471 }' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:12.471 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=576 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.471 02:49:23 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.471 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:12.731 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.731 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.731 "name": "raid_bdev1", 00:16:12.731 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:12.731 "strip_size_kb": 0, 00:16:12.731 "state": "online", 00:16:12.731 "raid_level": "raid1", 00:16:12.731 "superblock": true, 00:16:12.731 "num_base_bdevs": 2, 00:16:12.731 "num_base_bdevs_discovered": 2, 00:16:12.731 "num_base_bdevs_operational": 2, 00:16:12.731 "process": { 00:16:12.731 "type": "rebuild", 00:16:12.731 "target": "spare", 00:16:12.731 "progress": { 00:16:12.731 "blocks": 2816, 00:16:12.731 "percent": 35 00:16:12.731 } 00:16:12.731 }, 00:16:12.731 "base_bdevs_list": [ 00:16:12.731 { 00:16:12.731 "name": "spare", 00:16:12.731 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:12.731 "is_configured": true, 00:16:12.731 "data_offset": 256, 00:16:12.731 "data_size": 7936 00:16:12.731 }, 00:16:12.731 { 00:16:12.731 "name": "BaseBdev2", 00:16:12.731 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:12.731 "is_configured": true, 00:16:12.731 "data_offset": 256, 00:16:12.731 "data_size": 7936 00:16:12.731 } 00:16:12.731 ] 00:16:12.731 }' 00:16:12.731 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.731 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.731 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.731 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.731 02:49:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.671 "name": "raid_bdev1", 00:16:13.671 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:13.671 "strip_size_kb": 0, 00:16:13.671 "state": "online", 00:16:13.671 "raid_level": "raid1", 00:16:13.671 "superblock": true, 00:16:13.671 "num_base_bdevs": 2, 00:16:13.671 "num_base_bdevs_discovered": 2, 00:16:13.671 "num_base_bdevs_operational": 2, 00:16:13.671 "process": { 00:16:13.671 "type": "rebuild", 00:16:13.671 "target": "spare", 00:16:13.671 "progress": { 00:16:13.671 "blocks": 5632, 00:16:13.671 "percent": 70 00:16:13.671 } 00:16:13.671 }, 00:16:13.671 "base_bdevs_list": [ 00:16:13.671 { 
00:16:13.671 "name": "spare", 00:16:13.671 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:13.671 "is_configured": true, 00:16:13.671 "data_offset": 256, 00:16:13.671 "data_size": 7936 00:16:13.671 }, 00:16:13.671 { 00:16:13.671 "name": "BaseBdev2", 00:16:13.671 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:13.671 "is_configured": true, 00:16:13.671 "data_offset": 256, 00:16:13.671 "data_size": 7936 00:16:13.671 } 00:16:13.671 ] 00:16:13.671 }' 00:16:13.671 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.930 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.931 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.931 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.931 02:49:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.498 [2024-12-07 02:49:25.511548] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:14.498 [2024-12-07 02:49:25.511722] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:14.498 [2024-12-07 02:49:25.511844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.068 "name": "raid_bdev1", 00:16:15.068 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:15.068 "strip_size_kb": 0, 00:16:15.068 "state": "online", 00:16:15.068 "raid_level": "raid1", 00:16:15.068 "superblock": true, 00:16:15.068 "num_base_bdevs": 2, 00:16:15.068 "num_base_bdevs_discovered": 2, 00:16:15.068 "num_base_bdevs_operational": 2, 00:16:15.068 "base_bdevs_list": [ 00:16:15.068 { 00:16:15.068 "name": "spare", 00:16:15.068 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:15.068 "is_configured": true, 00:16:15.068 "data_offset": 256, 00:16:15.068 "data_size": 7936 00:16:15.068 }, 00:16:15.068 { 00:16:15.068 "name": "BaseBdev2", 00:16:15.068 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:15.068 "is_configured": true, 00:16:15.068 "data_offset": 256, 00:16:15.068 "data_size": 7936 00:16:15.068 } 00:16:15.068 ] 00:16:15.068 }' 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.068 02:49:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.068 "name": "raid_bdev1", 00:16:15.068 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:15.068 "strip_size_kb": 0, 00:16:15.068 "state": "online", 00:16:15.068 "raid_level": "raid1", 00:16:15.068 "superblock": true, 00:16:15.068 "num_base_bdevs": 2, 00:16:15.068 "num_base_bdevs_discovered": 2, 00:16:15.068 "num_base_bdevs_operational": 2, 00:16:15.068 "base_bdevs_list": [ 00:16:15.068 { 00:16:15.068 "name": "spare", 00:16:15.068 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:15.068 "is_configured": true, 00:16:15.068 
"data_offset": 256, 00:16:15.068 "data_size": 7936 00:16:15.068 }, 00:16:15.068 { 00:16:15.068 "name": "BaseBdev2", 00:16:15.068 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:15.068 "is_configured": true, 00:16:15.068 "data_offset": 256, 00:16:15.068 "data_size": 7936 00:16:15.068 } 00:16:15.068 ] 00:16:15.068 }' 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.068 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.328 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.328 "name": "raid_bdev1", 00:16:15.328 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:15.328 "strip_size_kb": 0, 00:16:15.328 "state": "online", 00:16:15.328 "raid_level": "raid1", 00:16:15.328 "superblock": true, 00:16:15.328 "num_base_bdevs": 2, 00:16:15.328 "num_base_bdevs_discovered": 2, 00:16:15.328 "num_base_bdevs_operational": 2, 00:16:15.328 "base_bdevs_list": [ 00:16:15.328 { 00:16:15.328 "name": "spare", 00:16:15.328 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:15.328 "is_configured": true, 00:16:15.328 "data_offset": 256, 00:16:15.328 "data_size": 7936 00:16:15.328 }, 00:16:15.328 { 00:16:15.328 "name": "BaseBdev2", 00:16:15.328 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:15.328 "is_configured": true, 00:16:15.328 "data_offset": 256, 00:16:15.328 "data_size": 7936 00:16:15.328 } 00:16:15.328 ] 00:16:15.328 }' 00:16:15.328 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.328 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.587 
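The `verify_raid_bdev_process` / `verify_raid_bdev_state` helpers traced above repeatedly fetch `bdev_raid_get_bdevs all` output and filter it with `jq`. The filtering step can be sketched standalone; here a canned JSON array (trimmed, an illustrative stand-in for the live `rpc.py` output) is run through the same `jq` expressions the trace shows at `bdev_raid.sh@174-177`:

```shell
#!/bin/sh
# Sketch of the jq filtering used by verify_raid_bdev_process.
# "info" is a canned stand-in for `rpc.py bdev_raid_get_bdevs all` output
# (an assumption for illustration, not taken live from SPDK).
info='[{"name":"raid_bdev1","state":"online","process":{"type":"rebuild","target":"spare"}}]'

# Select the entry for raid_bdev1, as the helper does.
bdev=$(printf '%s' "$info" | jq -r '.[] | select(.name == "raid_bdev1")')

# ".process.type // \"none\"" falls back to "none" when no rebuild is running.
ptype=$(printf '%s' "$bdev" | jq -r '.process.type // "none"')
ptarget=$(printf '%s' "$bdev" | jq -r '.process.target // "none"')

echo "type=$ptype target=$ptarget"
```

With the canned input this prints `type=rebuild target=spare`; on a raid bdev with no active process, both expressions fall back to `none`, which is exactly the comparison (`[[ none == \n\o\n\e ]]`) visible in the trace.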
[2024-12-07 02:49:26.548753] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.587 [2024-12-07 02:49:26.548836] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.587 [2024-12-07 02:49:26.548962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.587 [2024-12-07 02:49:26.549052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.587 [2024-12-07 02:49:26.549107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.587 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:15.847 /dev/nbd0 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:15.847 1+0 records in 00:16:15.847 1+0 records out 00:16:15.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289578 s, 14.1 MB/s 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:15.847 02:49:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:16.107 /dev/nbd1 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.107 1+0 records in 00:16:16.107 1+0 records out 00:16:16.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227671 s, 18.0 MB/s 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.107 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:16.367 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:16.626 02:49:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.626 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.627 [2024-12-07 02:49:27.587856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.627 [2024-12-07 02:49:27.587914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.627 [2024-12-07 02:49:27.587935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:16.627 [2024-12-07 02:49:27.587950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.627 [2024-12-07 02:49:27.590465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.627 
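The `waitfornbd` / `waitfornbd_exit` loops traced above poll `/proc/partitions` up to 20 times for the nbd device name before reading a block with `dd`. A minimal generic sketch of that retry pattern follows; `wait_for_entry` and the temp-file stand-in for `/proc/partitions` are illustrative inventions, not functions from `autotest_common.sh`:

```shell
#!/bin/sh
# Generic sketch of the waitfornbd retry pattern: poll a file for a
# word-matched entry, with a bounded number of attempts.
# A plain temp file stands in for /proc/partitions so the demo is
# self-contained (no nbd devices needed).
wait_for_entry() {
    name=$1; file=$2; tries=$3
    i=1
    while [ "$i" -le "$tries" ]; do
        # waitfornbd uses the same grep -q -w word match on /proc/partitions
        if grep -q -w "$name" "$file"; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp)
printf 'nbd0\n' > "$tmp"
wait_for_entry nbd0 "$tmp" 20 && r1=0 || r1=1   # present: succeeds
wait_for_entry nbd9 "$tmp" 2  && r2=0 || r2=1   # absent: gives up
rm -f "$tmp"
echo "r1=$r1 r2=$r2"
```

In the real helper, a successful match is followed by the single-block `dd if=/dev/nbdX ... bs=4096 count=1 iflag=direct` read seen in the trace, which confirms the device actually serves I/O rather than merely existing in the partition table.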
[2024-12-07 02:49:27.590561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.627 [2024-12-07 02:49:27.590667] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:16.627 [2024-12-07 02:49:27.590722] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:16.627 [2024-12-07 02:49:27.590853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.627 spare 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.627 [2024-12-07 02:49:27.690759] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:16.627 [2024-12-07 02:49:27.690783] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:16.627 [2024-12-07 02:49:27.691071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:16.627 [2024-12-07 02:49:27.691251] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:16.627 [2024-12-07 02:49:27.691266] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:16.627 [2024-12-07 02:49:27.691412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.627 02:49:27 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.627 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:16.886 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.886 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.886 "name": "raid_bdev1", 00:16:16.886 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:16.886 "strip_size_kb": 0, 00:16:16.886 "state": "online", 00:16:16.886 "raid_level": "raid1", 00:16:16.886 "superblock": true, 00:16:16.886 "num_base_bdevs": 2, 00:16:16.886 "num_base_bdevs_discovered": 2, 00:16:16.886 "num_base_bdevs_operational": 2, 
00:16:16.886 "base_bdevs_list": [ 00:16:16.886 { 00:16:16.886 "name": "spare", 00:16:16.886 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:16.886 "is_configured": true, 00:16:16.886 "data_offset": 256, 00:16:16.886 "data_size": 7936 00:16:16.886 }, 00:16:16.886 { 00:16:16.886 "name": "BaseBdev2", 00:16:16.886 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:16.886 "is_configured": true, 00:16:16.886 "data_offset": 256, 00:16:16.886 "data_size": 7936 00:16:16.886 } 00:16:16.886 ] 00:16:16.886 }' 00:16:16.886 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.886 02:49:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.145 "name": "raid_bdev1", 00:16:17.145 
"uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:17.145 "strip_size_kb": 0, 00:16:17.145 "state": "online", 00:16:17.145 "raid_level": "raid1", 00:16:17.145 "superblock": true, 00:16:17.145 "num_base_bdevs": 2, 00:16:17.145 "num_base_bdevs_discovered": 2, 00:16:17.145 "num_base_bdevs_operational": 2, 00:16:17.145 "base_bdevs_list": [ 00:16:17.145 { 00:16:17.145 "name": "spare", 00:16:17.145 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:17.145 "is_configured": true, 00:16:17.145 "data_offset": 256, 00:16:17.145 "data_size": 7936 00:16:17.145 }, 00:16:17.145 { 00:16:17.145 "name": "BaseBdev2", 00:16:17.145 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:17.145 "is_configured": true, 00:16:17.145 "data_offset": 256, 00:16:17.145 "data_size": 7936 00:16:17.145 } 00:16:17.145 ] 00:16:17.145 }' 00:16:17.145 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.403 02:49:28 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.404 [2024-12-07 02:49:28.346718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.404 
02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.404 "name": "raid_bdev1", 00:16:17.404 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:17.404 "strip_size_kb": 0, 00:16:17.404 "state": "online", 00:16:17.404 "raid_level": "raid1", 00:16:17.404 "superblock": true, 00:16:17.404 "num_base_bdevs": 2, 00:16:17.404 "num_base_bdevs_discovered": 1, 00:16:17.404 "num_base_bdevs_operational": 1, 00:16:17.404 "base_bdevs_list": [ 00:16:17.404 { 00:16:17.404 "name": null, 00:16:17.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.404 "is_configured": false, 00:16:17.404 "data_offset": 0, 00:16:17.404 "data_size": 7936 00:16:17.404 }, 00:16:17.404 { 00:16:17.404 "name": "BaseBdev2", 00:16:17.404 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:17.404 "is_configured": true, 00:16:17.404 "data_offset": 256, 00:16:17.404 "data_size": 7936 00:16:17.404 } 00:16:17.404 ] 00:16:17.404 }' 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.404 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.973 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:17.973 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.973 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:17.973 [2024-12-07 02:49:28.837880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.973 [2024-12-07 02:49:28.838110] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:16:17.973 [2024-12-07 02:49:28.838171] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:17.973 [2024-12-07 02:49:28.838233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.973 [2024-12-07 02:49:28.845308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:17.973 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.973 02:49:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:17.973 [2024-12-07 02:49:28.847525] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.912 
"name": "raid_bdev1", 00:16:18.912 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:18.912 "strip_size_kb": 0, 00:16:18.912 "state": "online", 00:16:18.912 "raid_level": "raid1", 00:16:18.912 "superblock": true, 00:16:18.912 "num_base_bdevs": 2, 00:16:18.912 "num_base_bdevs_discovered": 2, 00:16:18.912 "num_base_bdevs_operational": 2, 00:16:18.912 "process": { 00:16:18.912 "type": "rebuild", 00:16:18.912 "target": "spare", 00:16:18.912 "progress": { 00:16:18.912 "blocks": 2560, 00:16:18.912 "percent": 32 00:16:18.912 } 00:16:18.912 }, 00:16:18.912 "base_bdevs_list": [ 00:16:18.912 { 00:16:18.912 "name": "spare", 00:16:18.912 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:18.912 "is_configured": true, 00:16:18.912 "data_offset": 256, 00:16:18.912 "data_size": 7936 00:16:18.912 }, 00:16:18.912 { 00:16:18.912 "name": "BaseBdev2", 00:16:18.912 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:18.912 "is_configured": true, 00:16:18.912 "data_offset": 256, 00:16:18.912 "data_size": 7936 00:16:18.912 } 00:16:18.912 ] 00:16:18.912 }' 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.912 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.172 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.172 02:49:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.172 [2024-12-07 02:49:30.007939] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.172 [2024-12-07 
02:49:30.055157] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:19.172 [2024-12-07 02:49:30.055210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.172 [2024-12-07 02:49:30.055228] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:19.172 [2024-12-07 02:49:30.055236] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.172 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.172 "name": "raid_bdev1", 00:16:19.172 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:19.172 "strip_size_kb": 0, 00:16:19.172 "state": "online", 00:16:19.172 "raid_level": "raid1", 00:16:19.172 "superblock": true, 00:16:19.172 "num_base_bdevs": 2, 00:16:19.172 "num_base_bdevs_discovered": 1, 00:16:19.172 "num_base_bdevs_operational": 1, 00:16:19.172 "base_bdevs_list": [ 00:16:19.172 { 00:16:19.172 "name": null, 00:16:19.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.172 "is_configured": false, 00:16:19.172 "data_offset": 0, 00:16:19.172 "data_size": 7936 00:16:19.172 }, 00:16:19.172 { 00:16:19.172 "name": "BaseBdev2", 00:16:19.173 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:19.173 "is_configured": true, 00:16:19.173 "data_offset": 256, 00:16:19.173 "data_size": 7936 00:16:19.173 } 00:16:19.173 ] 00:16:19.173 }' 00:16:19.173 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.173 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.743 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.743 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.743 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:19.743 [2024-12-07 02:49:30.541659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.743 [2024-12-07 02:49:30.541760] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.743 [2024-12-07 02:49:30.541802] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:19.743 [2024-12-07 02:49:30.541830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.743 [2024-12-07 02:49:30.542355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.743 [2024-12-07 02:49:30.542413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.743 [2024-12-07 02:49:30.542530] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.743 [2024-12-07 02:49:30.542568] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:19.743 [2024-12-07 02:49:30.542635] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:19.743 [2024-12-07 02:49:30.542706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.743 [2024-12-07 02:49:30.548689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:19.743 spare 00:16:19.743 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.743 02:49:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:19.743 [2024-12-07 02:49:30.550738] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.687 "name": "raid_bdev1", 00:16:20.687 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:20.687 "strip_size_kb": 0, 00:16:20.687 
"state": "online", 00:16:20.687 "raid_level": "raid1", 00:16:20.687 "superblock": true, 00:16:20.687 "num_base_bdevs": 2, 00:16:20.687 "num_base_bdevs_discovered": 2, 00:16:20.687 "num_base_bdevs_operational": 2, 00:16:20.687 "process": { 00:16:20.687 "type": "rebuild", 00:16:20.687 "target": "spare", 00:16:20.687 "progress": { 00:16:20.687 "blocks": 2560, 00:16:20.687 "percent": 32 00:16:20.687 } 00:16:20.687 }, 00:16:20.687 "base_bdevs_list": [ 00:16:20.687 { 00:16:20.687 "name": "spare", 00:16:20.687 "uuid": "6e5046be-254f-5393-ae9c-7fa5d1243e76", 00:16:20.687 "is_configured": true, 00:16:20.687 "data_offset": 256, 00:16:20.687 "data_size": 7936 00:16:20.687 }, 00:16:20.687 { 00:16:20.687 "name": "BaseBdev2", 00:16:20.687 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:20.687 "is_configured": true, 00:16:20.687 "data_offset": 256, 00:16:20.687 "data_size": 7936 00:16:20.687 } 00:16:20.687 ] 00:16:20.687 }' 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.687 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.687 [2024-12-07 02:49:31.702735] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.687 [2024-12-07 02:49:31.758405] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:16:20.687 [2024-12-07 02:49:31.758469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.687 [2024-12-07 02:49:31.758484] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.687 [2024-12-07 02:49:31.758494] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.946 02:49:31 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.946 "name": "raid_bdev1", 00:16:20.946 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:20.946 "strip_size_kb": 0, 00:16:20.946 "state": "online", 00:16:20.946 "raid_level": "raid1", 00:16:20.946 "superblock": true, 00:16:20.946 "num_base_bdevs": 2, 00:16:20.946 "num_base_bdevs_discovered": 1, 00:16:20.946 "num_base_bdevs_operational": 1, 00:16:20.946 "base_bdevs_list": [ 00:16:20.946 { 00:16:20.946 "name": null, 00:16:20.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.946 "is_configured": false, 00:16:20.946 "data_offset": 0, 00:16:20.946 "data_size": 7936 00:16:20.946 }, 00:16:20.946 { 00:16:20.946 "name": "BaseBdev2", 00:16:20.946 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:20.946 "is_configured": true, 00:16:20.946 "data_offset": 256, 00:16:20.946 "data_size": 7936 00:16:20.946 } 00:16:20.946 ] 00:16:20.946 }' 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.946 02:49:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.205 "name": "raid_bdev1", 00:16:21.205 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:21.205 "strip_size_kb": 0, 00:16:21.205 "state": "online", 00:16:21.205 "raid_level": "raid1", 00:16:21.205 "superblock": true, 00:16:21.205 "num_base_bdevs": 2, 00:16:21.205 "num_base_bdevs_discovered": 1, 00:16:21.205 "num_base_bdevs_operational": 1, 00:16:21.205 "base_bdevs_list": [ 00:16:21.205 { 00:16:21.205 "name": null, 00:16:21.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.205 "is_configured": false, 00:16:21.205 "data_offset": 0, 00:16:21.205 "data_size": 7936 00:16:21.205 }, 00:16:21.205 { 00:16:21.205 "name": "BaseBdev2", 00:16:21.205 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:21.205 "is_configured": true, 00:16:21.205 "data_offset": 256, 00:16:21.205 "data_size": 7936 00:16:21.205 } 00:16:21.205 ] 00:16:21.205 }' 00:16:21.205 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:21.463 [2024-12-07 02:49:32.392344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:21.463 [2024-12-07 02:49:32.392402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.463 [2024-12-07 02:49:32.392423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:21.463 [2024-12-07 02:49:32.392434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.463 [2024-12-07 02:49:32.392889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.463 [2024-12-07 02:49:32.392910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.463 [2024-12-07 02:49:32.392981] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:21.463 [2024-12-07 02:49:32.393002] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:21.463 [2024-12-07 02:49:32.393012] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:21.463 [2024-12-07 02:49:32.393028] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:21.463 BaseBdev1 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.463 02:49:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:22.400 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.401 "name": "raid_bdev1", 00:16:22.401 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:22.401 "strip_size_kb": 0, 00:16:22.401 "state": "online", 00:16:22.401 "raid_level": "raid1", 00:16:22.401 "superblock": true, 00:16:22.401 "num_base_bdevs": 2, 00:16:22.401 "num_base_bdevs_discovered": 1, 00:16:22.401 "num_base_bdevs_operational": 1, 00:16:22.401 "base_bdevs_list": [ 00:16:22.401 { 00:16:22.401 "name": null, 00:16:22.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.401 "is_configured": false, 00:16:22.401 "data_offset": 0, 00:16:22.401 "data_size": 7936 00:16:22.401 }, 00:16:22.401 { 00:16:22.401 "name": "BaseBdev2", 00:16:22.401 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:22.401 "is_configured": true, 00:16:22.401 "data_offset": 256, 00:16:22.401 "data_size": 7936 00:16:22.401 } 00:16:22.401 ] 00:16:22.401 }' 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.401 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:22.779 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.779 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.779 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.779 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.779 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.038 "name": "raid_bdev1", 00:16:23.038 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:23.038 "strip_size_kb": 0, 00:16:23.038 "state": "online", 00:16:23.038 "raid_level": "raid1", 00:16:23.038 "superblock": true, 00:16:23.038 "num_base_bdevs": 2, 00:16:23.038 "num_base_bdevs_discovered": 1, 00:16:23.038 "num_base_bdevs_operational": 1, 00:16:23.038 "base_bdevs_list": [ 00:16:23.038 { 00:16:23.038 "name": null, 00:16:23.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.038 "is_configured": false, 00:16:23.038 "data_offset": 0, 00:16:23.038 "data_size": 7936 00:16:23.038 }, 00:16:23.038 { 00:16:23.038 "name": "BaseBdev2", 00:16:23.038 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:23.038 "is_configured": true, 00:16:23.038 "data_offset": 256, 00:16:23.038 "data_size": 7936 00:16:23.038 } 00:16:23.038 ] 00:16:23.038 }' 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.038 02:49:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.038 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.038 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:23.038 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:16:23.038 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:23.038 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:23.039 [2024-12-07 02:49:34.017638] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.039 [2024-12-07 02:49:34.017839] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:23.039 [2024-12-07 02:49:34.017852] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:23.039 request: 00:16:23.039 { 00:16:23.039 "base_bdev": "BaseBdev1", 00:16:23.039 "raid_bdev": "raid_bdev1", 00:16:23.039 "method": "bdev_raid_add_base_bdev", 00:16:23.039 "req_id": 1 00:16:23.039 } 00:16:23.039 Got JSON-RPC error response 00:16:23.039 response: 00:16:23.039 { 00:16:23.039 "code": -22, 00:16:23.039 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:23.039 } 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:23.039 02:49:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:23.977 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.237 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.237 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.237 "name": "raid_bdev1", 00:16:24.237 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:24.237 "strip_size_kb": 0, 00:16:24.237 "state": "online", 00:16:24.237 "raid_level": "raid1", 00:16:24.237 "superblock": true, 00:16:24.237 "num_base_bdevs": 2, 00:16:24.237 "num_base_bdevs_discovered": 1, 00:16:24.237 "num_base_bdevs_operational": 1, 00:16:24.237 "base_bdevs_list": [ 00:16:24.237 { 00:16:24.237 "name": null, 00:16:24.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.237 "is_configured": false, 00:16:24.237 "data_offset": 0, 00:16:24.237 "data_size": 7936 00:16:24.237 }, 00:16:24.237 { 00:16:24.237 "name": "BaseBdev2", 00:16:24.237 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:24.237 "is_configured": true, 00:16:24.237 "data_offset": 256, 00:16:24.237 "data_size": 7936 00:16:24.237 } 00:16:24.237 ] 00:16:24.237 }' 00:16:24.237 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.237 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.495 02:49:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.495 "name": "raid_bdev1", 00:16:24.495 "uuid": "2325d827-5e9a-40e8-89b3-afe125b1c959", 00:16:24.495 "strip_size_kb": 0, 00:16:24.495 "state": "online", 00:16:24.495 "raid_level": "raid1", 00:16:24.495 "superblock": true, 00:16:24.495 "num_base_bdevs": 2, 00:16:24.495 "num_base_bdevs_discovered": 1, 00:16:24.495 "num_base_bdevs_operational": 1, 00:16:24.495 "base_bdevs_list": [ 00:16:24.495 { 00:16:24.495 "name": null, 00:16:24.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.495 "is_configured": false, 00:16:24.495 "data_offset": 0, 00:16:24.495 "data_size": 7936 00:16:24.495 }, 00:16:24.495 { 00:16:24.495 "name": "BaseBdev2", 00:16:24.495 "uuid": "8116e208-df18-5539-ba83-c96c14f6d3ec", 00:16:24.495 "is_configured": true, 00:16:24.495 "data_offset": 256, 00:16:24.495 "data_size": 7936 00:16:24.495 } 00:16:24.495 ] 00:16:24.495 }' 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.495 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.755 02:49:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 97114 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 97114 ']' 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 97114 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97114 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:24.755 killing process with pid 97114 00:16:24.755 Received shutdown signal, test time was about 60.000000 seconds 00:16:24.755 00:16:24.755 Latency(us) 00:16:24.755 [2024-12-07T02:49:35.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.755 [2024-12-07T02:49:35.833Z] =================================================================================================================== 00:16:24.755 [2024-12-07T02:49:35.833Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97114' 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 97114 00:16:24.755 [2024-12-07 02:49:35.654899] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.755 [2024-12-07 02:49:35.655040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.755 02:49:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 97114 00:16:24.755 [2024-12-07 
02:49:35.655099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.755 [2024-12-07 02:49:35.655109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:24.755 [2024-12-07 02:49:35.711797] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.014 02:49:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:25.014 00:16:25.014 real 0m18.705s 00:16:25.014 user 0m24.782s 00:16:25.014 sys 0m2.700s 00:16:25.014 ************************************ 00:16:25.014 END TEST raid_rebuild_test_sb_4k 00:16:25.014 ************************************ 00:16:25.014 02:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.014 02:49:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:25.274 02:49:36 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:25.274 02:49:36 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:25.274 02:49:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:25.274 02:49:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.274 02:49:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.274 ************************************ 00:16:25.274 START TEST raid_state_function_test_sb_md_separate 00:16:25.274 ************************************ 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:25.274 
02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:25.274 02:49:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:25.274 Process raid pid: 97798 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97798 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97798' 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97798 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97798 ']' 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:25.274 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:25.275 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:25.275 02:49:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:25.275 [2024-12-07 02:49:36.251827] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:25.275 [2024-12-07 02:49:36.252033] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.534 [2024-12-07 02:49:36.414167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.534 [2024-12-07 02:49:36.487338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.534 [2024-12-07 02:49:36.565515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.534 [2024-12-07 02:49:36.565650] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.105 [2024-12-07 02:49:37.074731] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.105 [2024-12-07 02:49:37.074785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:16:26.105 [2024-12-07 02:49:37.074797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.105 [2024-12-07 02:49:37.074807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.105 "name": "Existed_Raid", 00:16:26.105 "uuid": "de3b0782-4141-4241-a7ba-4ac93c6b9a8c", 00:16:26.105 "strip_size_kb": 0, 00:16:26.105 "state": "configuring", 00:16:26.105 "raid_level": "raid1", 00:16:26.105 "superblock": true, 00:16:26.105 "num_base_bdevs": 2, 00:16:26.105 "num_base_bdevs_discovered": 0, 00:16:26.105 "num_base_bdevs_operational": 2, 00:16:26.105 "base_bdevs_list": [ 00:16:26.105 { 00:16:26.105 "name": "BaseBdev1", 00:16:26.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.105 "is_configured": false, 00:16:26.105 "data_offset": 0, 00:16:26.105 "data_size": 0 00:16:26.105 }, 00:16:26.105 { 00:16:26.105 "name": "BaseBdev2", 00:16:26.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.105 "is_configured": false, 00:16:26.105 "data_offset": 0, 00:16:26.105 "data_size": 0 00:16:26.105 } 00:16:26.105 ] 00:16:26.105 }' 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.105 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.676 
[2024-12-07 02:49:37.577734] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.676 [2024-12-07 02:49:37.577837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state configuring 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.676 [2024-12-07 02:49:37.589753] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:26.676 [2024-12-07 02:49:37.589832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:26.676 [2024-12-07 02:49:37.589857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.676 [2024-12-07 02:49:37.589879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.676 [2024-12-07 02:49:37.618165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.676 
BaseBdev1 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.676 [ 00:16:26.676 { 00:16:26.676 "name": "BaseBdev1", 00:16:26.676 "aliases": [ 00:16:26.676 "52c292ef-5339-46d5-8b26-7b4def0c34d6" 00:16:26.676 ], 00:16:26.676 "product_name": "Malloc disk", 
00:16:26.676 "block_size": 4096, 00:16:26.676 "num_blocks": 8192, 00:16:26.676 "uuid": "52c292ef-5339-46d5-8b26-7b4def0c34d6", 00:16:26.676 "md_size": 32, 00:16:26.676 "md_interleave": false, 00:16:26.676 "dif_type": 0, 00:16:26.676 "assigned_rate_limits": { 00:16:26.676 "rw_ios_per_sec": 0, 00:16:26.676 "rw_mbytes_per_sec": 0, 00:16:26.676 "r_mbytes_per_sec": 0, 00:16:26.676 "w_mbytes_per_sec": 0 00:16:26.676 }, 00:16:26.676 "claimed": true, 00:16:26.676 "claim_type": "exclusive_write", 00:16:26.676 "zoned": false, 00:16:26.676 "supported_io_types": { 00:16:26.676 "read": true, 00:16:26.676 "write": true, 00:16:26.676 "unmap": true, 00:16:26.676 "flush": true, 00:16:26.676 "reset": true, 00:16:26.676 "nvme_admin": false, 00:16:26.676 "nvme_io": false, 00:16:26.676 "nvme_io_md": false, 00:16:26.676 "write_zeroes": true, 00:16:26.676 "zcopy": true, 00:16:26.676 "get_zone_info": false, 00:16:26.676 "zone_management": false, 00:16:26.676 "zone_append": false, 00:16:26.676 "compare": false, 00:16:26.676 "compare_and_write": false, 00:16:26.676 "abort": true, 00:16:26.676 "seek_hole": false, 00:16:26.676 "seek_data": false, 00:16:26.676 "copy": true, 00:16:26.676 "nvme_iov_md": false 00:16:26.676 }, 00:16:26.676 "memory_domains": [ 00:16:26.676 { 00:16:26.676 "dma_device_id": "system", 00:16:26.676 "dma_device_type": 1 00:16:26.676 }, 00:16:26.676 { 00:16:26.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.676 "dma_device_type": 2 00:16:26.676 } 00:16:26.676 ], 00:16:26.676 "driver_specific": {} 00:16:26.676 } 00:16:26.676 ] 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:26.676 02:49:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:26.676 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.677 "name": "Existed_Raid", 00:16:26.677 "uuid": "0a687243-2a7c-45ea-b2ed-76762d862eb7", 
00:16:26.677 "strip_size_kb": 0, 00:16:26.677 "state": "configuring", 00:16:26.677 "raid_level": "raid1", 00:16:26.677 "superblock": true, 00:16:26.677 "num_base_bdevs": 2, 00:16:26.677 "num_base_bdevs_discovered": 1, 00:16:26.677 "num_base_bdevs_operational": 2, 00:16:26.677 "base_bdevs_list": [ 00:16:26.677 { 00:16:26.677 "name": "BaseBdev1", 00:16:26.677 "uuid": "52c292ef-5339-46d5-8b26-7b4def0c34d6", 00:16:26.677 "is_configured": true, 00:16:26.677 "data_offset": 256, 00:16:26.677 "data_size": 7936 00:16:26.677 }, 00:16:26.677 { 00:16:26.677 "name": "BaseBdev2", 00:16:26.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.677 "is_configured": false, 00:16:26.677 "data_offset": 0, 00:16:26.677 "data_size": 0 00:16:26.677 } 00:16:26.677 ] 00:16:26.677 }' 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.677 02:49:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.247 [2024-12-07 02:49:38.081417] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:27.247 [2024-12-07 02:49:38.081461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:27.247 02:49:38 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.247 [2024-12-07 02:49:38.093473] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:27.247 [2024-12-07 02:49:38.095652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:27.247 [2024-12-07 02:49:38.095742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.247 "name": "Existed_Raid", 00:16:27.247 "uuid": "a616caa8-1ef2-48a4-8755-222627090722", 00:16:27.247 "strip_size_kb": 0, 00:16:27.247 "state": "configuring", 00:16:27.247 "raid_level": "raid1", 00:16:27.247 "superblock": true, 00:16:27.247 "num_base_bdevs": 2, 00:16:27.247 "num_base_bdevs_discovered": 1, 00:16:27.247 "num_base_bdevs_operational": 2, 00:16:27.247 "base_bdevs_list": [ 00:16:27.247 { 00:16:27.247 "name": "BaseBdev1", 00:16:27.247 "uuid": "52c292ef-5339-46d5-8b26-7b4def0c34d6", 00:16:27.247 "is_configured": true, 00:16:27.247 "data_offset": 256, 00:16:27.247 "data_size": 7936 00:16:27.247 }, 00:16:27.247 { 00:16:27.247 "name": "BaseBdev2", 00:16:27.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.247 "is_configured": false, 00:16:27.247 "data_offset": 0, 00:16:27.247 "data_size": 0 00:16:27.247 } 00:16:27.247 ] 00:16:27.247 }' 00:16:27.247 02:49:38 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.247 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 [2024-12-07 02:49:38.520985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.508 [2024-12-07 02:49:38.521202] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:27.508 [2024-12-07 02:49:38.521220] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:27.508 [2024-12-07 02:49:38.521342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:27.508 [2024-12-07 02:49:38.521476] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:27.508 [2024-12-07 02:49:38.521494] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:27.508 [2024-12-07 02:49:38.521621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.508 BaseBdev2 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.508 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 [ 00:16:27.508 { 00:16:27.508 "name": "BaseBdev2", 00:16:27.508 "aliases": [ 00:16:27.508 "6fa25cc1-3739-492b-9a93-e427d4ddd683" 00:16:27.508 ], 00:16:27.508 "product_name": "Malloc disk", 00:16:27.508 "block_size": 4096, 00:16:27.508 "num_blocks": 8192, 00:16:27.508 "uuid": "6fa25cc1-3739-492b-9a93-e427d4ddd683", 00:16:27.508 "md_size": 32, 00:16:27.508 "md_interleave": false, 00:16:27.508 "dif_type": 0, 00:16:27.508 "assigned_rate_limits": { 00:16:27.508 "rw_ios_per_sec": 0, 00:16:27.508 "rw_mbytes_per_sec": 0, 00:16:27.508 "r_mbytes_per_sec": 0, 00:16:27.508 "w_mbytes_per_sec": 0 00:16:27.508 }, 00:16:27.508 "claimed": true, 00:16:27.508 "claim_type": 
"exclusive_write", 00:16:27.508 "zoned": false, 00:16:27.508 "supported_io_types": { 00:16:27.508 "read": true, 00:16:27.508 "write": true, 00:16:27.508 "unmap": true, 00:16:27.508 "flush": true, 00:16:27.508 "reset": true, 00:16:27.508 "nvme_admin": false, 00:16:27.508 "nvme_io": false, 00:16:27.508 "nvme_io_md": false, 00:16:27.508 "write_zeroes": true, 00:16:27.508 "zcopy": true, 00:16:27.508 "get_zone_info": false, 00:16:27.508 "zone_management": false, 00:16:27.508 "zone_append": false, 00:16:27.508 "compare": false, 00:16:27.508 "compare_and_write": false, 00:16:27.508 "abort": true, 00:16:27.509 "seek_hole": false, 00:16:27.509 "seek_data": false, 00:16:27.509 "copy": true, 00:16:27.509 "nvme_iov_md": false 00:16:27.509 }, 00:16:27.509 "memory_domains": [ 00:16:27.509 { 00:16:27.509 "dma_device_id": "system", 00:16:27.509 "dma_device_type": 1 00:16:27.509 }, 00:16:27.509 { 00:16:27.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.509 "dma_device_type": 2 00:16:27.509 } 00:16:27.509 ], 00:16:27.509 "driver_specific": {} 00:16:27.509 } 00:16:27.509 ] 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.509 
02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:27.509 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.768 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.768 "name": "Existed_Raid", 00:16:27.768 "uuid": "a616caa8-1ef2-48a4-8755-222627090722", 00:16:27.768 "strip_size_kb": 0, 00:16:27.768 "state": "online", 00:16:27.768 "raid_level": "raid1", 00:16:27.768 "superblock": true, 00:16:27.768 "num_base_bdevs": 2, 00:16:27.768 "num_base_bdevs_discovered": 2, 00:16:27.768 "num_base_bdevs_operational": 2, 00:16:27.768 
"base_bdevs_list": [ 00:16:27.768 { 00:16:27.768 "name": "BaseBdev1", 00:16:27.768 "uuid": "52c292ef-5339-46d5-8b26-7b4def0c34d6", 00:16:27.768 "is_configured": true, 00:16:27.768 "data_offset": 256, 00:16:27.768 "data_size": 7936 00:16:27.768 }, 00:16:27.768 { 00:16:27.768 "name": "BaseBdev2", 00:16:27.768 "uuid": "6fa25cc1-3739-492b-9a93-e427d4ddd683", 00:16:27.768 "is_configured": true, 00:16:27.768 "data_offset": 256, 00:16:27.768 "data_size": 7936 00:16:27.768 } 00:16:27.768 ] 00:16:27.768 }' 00:16:27.768 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.768 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.029 02:49:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:16:28.029 [2024-12-07 02:49:38.988471] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.029 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.029 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:28.029 "name": "Existed_Raid", 00:16:28.029 "aliases": [ 00:16:28.029 "a616caa8-1ef2-48a4-8755-222627090722" 00:16:28.029 ], 00:16:28.029 "product_name": "Raid Volume", 00:16:28.029 "block_size": 4096, 00:16:28.029 "num_blocks": 7936, 00:16:28.029 "uuid": "a616caa8-1ef2-48a4-8755-222627090722", 00:16:28.029 "md_size": 32, 00:16:28.029 "md_interleave": false, 00:16:28.029 "dif_type": 0, 00:16:28.029 "assigned_rate_limits": { 00:16:28.029 "rw_ios_per_sec": 0, 00:16:28.029 "rw_mbytes_per_sec": 0, 00:16:28.029 "r_mbytes_per_sec": 0, 00:16:28.029 "w_mbytes_per_sec": 0 00:16:28.029 }, 00:16:28.029 "claimed": false, 00:16:28.029 "zoned": false, 00:16:28.029 "supported_io_types": { 00:16:28.029 "read": true, 00:16:28.029 "write": true, 00:16:28.029 "unmap": false, 00:16:28.029 "flush": false, 00:16:28.029 "reset": true, 00:16:28.029 "nvme_admin": false, 00:16:28.029 "nvme_io": false, 00:16:28.029 "nvme_io_md": false, 00:16:28.029 "write_zeroes": true, 00:16:28.029 "zcopy": false, 00:16:28.029 "get_zone_info": false, 00:16:28.029 "zone_management": false, 00:16:28.029 "zone_append": false, 00:16:28.029 "compare": false, 00:16:28.029 "compare_and_write": false, 00:16:28.029 "abort": false, 00:16:28.029 "seek_hole": false, 00:16:28.029 "seek_data": false, 00:16:28.029 "copy": false, 00:16:28.029 "nvme_iov_md": false 00:16:28.029 }, 00:16:28.029 "memory_domains": [ 00:16:28.029 { 00:16:28.029 "dma_device_id": "system", 00:16:28.029 "dma_device_type": 1 00:16:28.029 }, 00:16:28.029 { 00:16:28.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.029 "dma_device_type": 2 00:16:28.029 }, 00:16:28.029 { 
00:16:28.029 "dma_device_id": "system", 00:16:28.029 "dma_device_type": 1 00:16:28.029 }, 00:16:28.029 { 00:16:28.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.029 "dma_device_type": 2 00:16:28.029 } 00:16:28.029 ], 00:16:28.029 "driver_specific": { 00:16:28.029 "raid": { 00:16:28.029 "uuid": "a616caa8-1ef2-48a4-8755-222627090722", 00:16:28.029 "strip_size_kb": 0, 00:16:28.029 "state": "online", 00:16:28.029 "raid_level": "raid1", 00:16:28.029 "superblock": true, 00:16:28.029 "num_base_bdevs": 2, 00:16:28.029 "num_base_bdevs_discovered": 2, 00:16:28.029 "num_base_bdevs_operational": 2, 00:16:28.029 "base_bdevs_list": [ 00:16:28.029 { 00:16:28.029 "name": "BaseBdev1", 00:16:28.029 "uuid": "52c292ef-5339-46d5-8b26-7b4def0c34d6", 00:16:28.029 "is_configured": true, 00:16:28.029 "data_offset": 256, 00:16:28.029 "data_size": 7936 00:16:28.029 }, 00:16:28.029 { 00:16:28.029 "name": "BaseBdev2", 00:16:28.029 "uuid": "6fa25cc1-3739-492b-9a93-e427d4ddd683", 00:16:28.029 "is_configured": true, 00:16:28.029 "data_offset": 256, 00:16:28.029 "data_size": 7936 00:16:28.029 } 00:16:28.029 ] 00:16:28.029 } 00:16:28.029 } 00:16:28.029 }' 00:16:28.029 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.029 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:28.029 BaseBdev2' 00:16:28.029 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.029 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:28.029 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.290 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.291 [2024-12-07 02:49:39.208068] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.291 "name": "Existed_Raid", 00:16:28.291 "uuid": "a616caa8-1ef2-48a4-8755-222627090722", 00:16:28.291 "strip_size_kb": 0, 00:16:28.291 "state": "online", 00:16:28.291 "raid_level": "raid1", 00:16:28.291 "superblock": true, 00:16:28.291 "num_base_bdevs": 2, 00:16:28.291 "num_base_bdevs_discovered": 1, 00:16:28.291 "num_base_bdevs_operational": 1, 00:16:28.291 "base_bdevs_list": [ 00:16:28.291 { 00:16:28.291 "name": null, 00:16:28.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.291 "is_configured": false, 00:16:28.291 "data_offset": 0, 00:16:28.291 "data_size": 7936 00:16:28.291 }, 00:16:28.291 { 00:16:28.291 "name": "BaseBdev2", 00:16:28.291 "uuid": 
"6fa25cc1-3739-492b-9a93-e427d4ddd683", 00:16:28.291 "is_configured": true, 00:16:28.291 "data_offset": 256, 00:16:28.291 "data_size": 7936 00:16:28.291 } 00:16:28.291 ] 00:16:28.291 }' 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.291 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.862 [2024-12-07 02:49:39.745257] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.862 [2024-12-07 02:49:39.745367] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.862 [2024-12-07 02:49:39.767891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.862 [2024-12-07 02:49:39.767942] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.862 [2024-12-07 02:49:39.767955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:28.862 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:28.863 02:49:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97798 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97798 ']' 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97798 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97798 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.863 killing process with pid 97798 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97798' 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97798 00:16:28.863 [2024-12-07 02:49:39.852859] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.863 02:49:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97798 00:16:28.863 [2024-12-07 02:49:39.854404] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.434 02:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:16:29.434 00:16:29.434 real 0m4.079s 00:16:29.434 user 0m6.143s 00:16:29.434 sys 0m0.993s 00:16:29.434 02:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.434 
02:49:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.434 ************************************ 00:16:29.434 END TEST raid_state_function_test_sb_md_separate 00:16:29.434 ************************************ 00:16:29.434 02:49:40 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:16:29.434 02:49:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:29.434 02:49:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.434 02:49:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.434 ************************************ 00:16:29.434 START TEST raid_superblock_test_md_separate 00:16:29.434 ************************************ 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=98040 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 98040 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98040 ']' 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.435 02:49:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:29.435 [2024-12-07 02:49:40.408109] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:29.435 [2024-12-07 02:49:40.408238] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98040 ] 00:16:29.695 [2024-12-07 02:49:40.569608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.695 [2024-12-07 02:49:40.641405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.695 [2024-12-07 02:49:40.719255] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.695 [2024-12-07 02:49:40.719295] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:30.267 02:49:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.267 malloc1 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.267 [2024-12-07 02:49:41.255834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:30.267 [2024-12-07 02:49:41.255900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.267 [2024-12-07 02:49:41.255923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:30.267 [2024-12-07 02:49:41.255934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.267 [2024-12-07 02:49:41.258072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.267 [2024-12-07 02:49:41.258107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:16:30.267 pt1 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.267 malloc2 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.267 02:49:41 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.267 [2024-12-07 02:49:41.305127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.267 [2024-12-07 02:49:41.305232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.267 [2024-12-07 02:49:41.305269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:30.267 [2024-12-07 02:49:41.305292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.267 [2024-12-07 02:49:41.309341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.267 [2024-12-07 02:49:41.309401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.267 pt2 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.267 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.268 [2024-12-07 02:49:41.317681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:30.268 [2024-12-07 02:49:41.320381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.268 [2024-12-07 02:49:41.320606] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:30.268 [2024-12-07 02:49:41.320637] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:30.268 [2024-12-07 02:49:41.320755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:30.268 [2024-12-07 02:49:41.320915] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:30.268 [2024-12-07 02:49:41.320937] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:30.268 [2024-12-07 02:49:41.321081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.268 02:49:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.268 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.528 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.528 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.528 "name": "raid_bdev1", 00:16:30.528 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:30.528 "strip_size_kb": 0, 00:16:30.528 "state": "online", 00:16:30.528 "raid_level": "raid1", 00:16:30.528 "superblock": true, 00:16:30.528 "num_base_bdevs": 2, 00:16:30.528 "num_base_bdevs_discovered": 2, 00:16:30.528 "num_base_bdevs_operational": 2, 00:16:30.528 "base_bdevs_list": [ 00:16:30.528 { 00:16:30.528 "name": "pt1", 00:16:30.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.528 "is_configured": true, 00:16:30.528 "data_offset": 256, 00:16:30.528 "data_size": 7936 00:16:30.528 }, 00:16:30.528 { 00:16:30.528 "name": "pt2", 00:16:30.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.528 "is_configured": true, 00:16:30.528 "data_offset": 256, 00:16:30.528 "data_size": 7936 00:16:30.528 } 00:16:30.528 ] 00:16:30.528 }' 00:16:30.528 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.528 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:30.789 [2024-12-07 02:49:41.785067] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:30.789 "name": "raid_bdev1", 00:16:30.789 "aliases": [ 00:16:30.789 "e2d90645-9ad8-4396-9b8b-fe738ecd59d9" 00:16:30.789 ], 00:16:30.789 "product_name": "Raid Volume", 00:16:30.789 "block_size": 4096, 00:16:30.789 "num_blocks": 7936, 00:16:30.789 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:30.789 "md_size": 32, 00:16:30.789 "md_interleave": false, 00:16:30.789 "dif_type": 0, 00:16:30.789 "assigned_rate_limits": { 00:16:30.789 "rw_ios_per_sec": 0, 00:16:30.789 "rw_mbytes_per_sec": 0, 00:16:30.789 "r_mbytes_per_sec": 0, 00:16:30.789 "w_mbytes_per_sec": 0 00:16:30.789 }, 00:16:30.789 "claimed": false, 00:16:30.789 "zoned": false, 
00:16:30.789 "supported_io_types": { 00:16:30.789 "read": true, 00:16:30.789 "write": true, 00:16:30.789 "unmap": false, 00:16:30.789 "flush": false, 00:16:30.789 "reset": true, 00:16:30.789 "nvme_admin": false, 00:16:30.789 "nvme_io": false, 00:16:30.789 "nvme_io_md": false, 00:16:30.789 "write_zeroes": true, 00:16:30.789 "zcopy": false, 00:16:30.789 "get_zone_info": false, 00:16:30.789 "zone_management": false, 00:16:30.789 "zone_append": false, 00:16:30.789 "compare": false, 00:16:30.789 "compare_and_write": false, 00:16:30.789 "abort": false, 00:16:30.789 "seek_hole": false, 00:16:30.789 "seek_data": false, 00:16:30.789 "copy": false, 00:16:30.789 "nvme_iov_md": false 00:16:30.789 }, 00:16:30.789 "memory_domains": [ 00:16:30.789 { 00:16:30.789 "dma_device_id": "system", 00:16:30.789 "dma_device_type": 1 00:16:30.789 }, 00:16:30.789 { 00:16:30.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.789 "dma_device_type": 2 00:16:30.789 }, 00:16:30.789 { 00:16:30.789 "dma_device_id": "system", 00:16:30.789 "dma_device_type": 1 00:16:30.789 }, 00:16:30.789 { 00:16:30.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.789 "dma_device_type": 2 00:16:30.789 } 00:16:30.789 ], 00:16:30.789 "driver_specific": { 00:16:30.789 "raid": { 00:16:30.789 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:30.789 "strip_size_kb": 0, 00:16:30.789 "state": "online", 00:16:30.789 "raid_level": "raid1", 00:16:30.789 "superblock": true, 00:16:30.789 "num_base_bdevs": 2, 00:16:30.789 "num_base_bdevs_discovered": 2, 00:16:30.789 "num_base_bdevs_operational": 2, 00:16:30.789 "base_bdevs_list": [ 00:16:30.789 { 00:16:30.789 "name": "pt1", 00:16:30.789 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:30.789 "is_configured": true, 00:16:30.789 "data_offset": 256, 00:16:30.789 "data_size": 7936 00:16:30.789 }, 00:16:30.789 { 00:16:30.789 "name": "pt2", 00:16:30.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.789 "is_configured": true, 00:16:30.789 "data_offset": 256, 
00:16:30.789 "data_size": 7936 00:16:30.789 } 00:16:30.789 ] 00:16:30.789 } 00:16:30.789 } 00:16:30.789 }' 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:30.789 pt2' 00:16:30.789 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 02:49:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:31.050 [2024-12-07 02:49:42.016566] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e2d90645-9ad8-4396-9b8b-fe738ecd59d9 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z e2d90645-9ad8-4396-9b8b-fe738ecd59d9 ']' 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 [2024-12-07 02:49:42.052291] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.050 [2024-12-07 02:49:42.052323] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.050 [2024-12-07 02:49:42.052411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.050 [2024-12-07 02:49:42.052473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:31.050 [2024-12-07 02:49:42.052484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
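The teardown sequence visible in this stretch of the log (delete the raid bdev, confirm `bdev_raid_get_bdevs all` now returns nothing, then remove each passthru base bdev in turn) can be sketched as the bash fragment below. The `rpc_cmd` calls, bdev names, and the `base_bdevs_pt` array mirror what the trace shows; the stubbed `rpc_cmd` function is purely illustrative (the real one in `autotest_common.sh` talks to the SPDK target over JSON-RPC), so treat this as a runnable sketch of the control flow, not the actual test code.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative stub of SPDK's rpc_cmd wrapper: the real helper forwards
# its arguments to scripts/rpc.py. Here it just logs the call so the
# teardown flow below can execute stand-alone.
rpc_cmd() {
    echo "rpc: $*" >&2
    # bdev_raid_get_bdevs returns an empty list once the raid is gone
    [[ $1 == bdev_raid_get_bdevs ]] && echo ""
    return 0
}

# The two passthru bdevs layered on malloc1/malloc2 earlier in the test
base_bdevs_pt=(pt1 pt2)

# 1. Delete the raid bdev built on top of the passthru devices.
rpc_cmd bdev_raid_delete raid_bdev1

# 2. Verify no raid bdevs remain (mirrors bdev_raid.sh@442-443).
raid_bdev=$(rpc_cmd bdev_raid_get_bdevs all)
[[ -z $raid_bdev ]] && echo "no raid bdevs left"

# 3. Tear down each passthru base bdev (mirrors bdev_raid.sh@448-449).
for i in "${base_bdevs_pt[@]}"; do
    rpc_cmd bdev_passthru_delete "$i"
done
```

Deleting the raid bdev before its base bdevs matters: as the surrounding log shows, the passthru bdevs still carry the raid superblock, which is exactly why the subsequent `bdev_raid_create` on `malloc1 malloc2` fails with `File exists` until the stale claims are cleared.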
00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.050 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:31.311 02:49:42 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.311 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 [2024-12-07 02:49:42.196169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.312 [2024-12-07 02:49:42.198322] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.312 [2024-12-07 02:49:42.198388] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:31.312 [2024-12-07 02:49:42.198426] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:31.312 [2024-12-07 02:49:42.198441] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.312 [2024-12-07 02:49:42.198450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:31.312 request: 00:16:31.312 { 00:16:31.312 "name": 
"raid_bdev1", 00:16:31.312 "raid_level": "raid1", 00:16:31.312 "base_bdevs": [ 00:16:31.312 "malloc1", 00:16:31.312 "malloc2" 00:16:31.312 ], 00:16:31.312 "superblock": false, 00:16:31.312 "method": "bdev_raid_create", 00:16:31.312 "req_id": 1 00:16:31.312 } 00:16:31.312 Got JSON-RPC error response 00:16:31.312 response: 00:16:31.312 { 00:16:31.312 "code": -17, 00:16:31.312 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.312 } 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 [2024-12-07 02:49:42.260093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.312 [2024-12-07 02:49:42.260130] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.312 [2024-12-07 02:49:42.260148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:31.312 [2024-12-07 02:49:42.260156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.312 [2024-12-07 02:49:42.262371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.312 [2024-12-07 02:49:42.262402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.312 [2024-12-07 02:49:42.262444] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:31.312 [2024-12-07 02:49:42.262475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.312 pt1 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.312 "name": "raid_bdev1", 00:16:31.312 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:31.312 "strip_size_kb": 0, 00:16:31.312 "state": "configuring", 00:16:31.312 "raid_level": "raid1", 00:16:31.312 "superblock": true, 00:16:31.312 "num_base_bdevs": 2, 00:16:31.312 "num_base_bdevs_discovered": 1, 00:16:31.312 "num_base_bdevs_operational": 2, 00:16:31.312 "base_bdevs_list": [ 00:16:31.312 { 00:16:31.312 "name": "pt1", 00:16:31.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.312 "is_configured": true, 00:16:31.312 "data_offset": 256, 00:16:31.312 "data_size": 7936 00:16:31.312 }, 00:16:31.312 { 00:16:31.312 "name": null, 00:16:31.312 
"uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.312 "is_configured": false, 00:16:31.312 "data_offset": 256, 00:16:31.312 "data_size": 7936 00:16:31.312 } 00:16:31.312 ] 00:16:31.312 }' 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.312 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.883 [2024-12-07 02:49:42.711413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:31.883 [2024-12-07 02:49:42.711491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.883 [2024-12-07 02:49:42.711520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:31.883 [2024-12-07 02:49:42.711530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.883 [2024-12-07 02:49:42.711790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.883 [2024-12-07 02:49:42.711806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:31.883 [2024-12-07 02:49:42.711862] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:16:31.883 [2024-12-07 02:49:42.711884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.883 [2024-12-07 02:49:42.711987] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:31.883 [2024-12-07 02:49:42.711997] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.883 [2024-12-07 02:49:42.712076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:31.883 [2024-12-07 02:49:42.712156] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:31.883 [2024-12-07 02:49:42.712169] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:31.883 [2024-12-07 02:49:42.712240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.883 pt2 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.883 "name": "raid_bdev1", 00:16:31.883 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:31.883 "strip_size_kb": 0, 00:16:31.883 "state": "online", 00:16:31.883 "raid_level": "raid1", 00:16:31.883 "superblock": true, 00:16:31.883 "num_base_bdevs": 2, 00:16:31.883 "num_base_bdevs_discovered": 2, 00:16:31.883 "num_base_bdevs_operational": 2, 00:16:31.883 "base_bdevs_list": [ 00:16:31.883 { 00:16:31.883 "name": "pt1", 00:16:31.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.883 "is_configured": true, 00:16:31.883 "data_offset": 256, 00:16:31.883 "data_size": 7936 00:16:31.883 }, 00:16:31.883 { 00:16:31.883 "name": "pt2", 00:16:31.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.883 "is_configured": true, 00:16:31.883 "data_offset": 256, 
00:16:31.883 "data_size": 7936 00:16:31.883 } 00:16:31.883 ] 00:16:31.883 }' 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.883 02:49:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.144 [2024-12-07 02:49:43.170839] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.144 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.144 "name": "raid_bdev1", 00:16:32.144 "aliases": [ 00:16:32.144 "e2d90645-9ad8-4396-9b8b-fe738ecd59d9" 00:16:32.144 ], 00:16:32.144 "product_name": 
"Raid Volume", 00:16:32.144 "block_size": 4096, 00:16:32.144 "num_blocks": 7936, 00:16:32.144 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:32.144 "md_size": 32, 00:16:32.144 "md_interleave": false, 00:16:32.144 "dif_type": 0, 00:16:32.144 "assigned_rate_limits": { 00:16:32.144 "rw_ios_per_sec": 0, 00:16:32.144 "rw_mbytes_per_sec": 0, 00:16:32.144 "r_mbytes_per_sec": 0, 00:16:32.144 "w_mbytes_per_sec": 0 00:16:32.144 }, 00:16:32.144 "claimed": false, 00:16:32.144 "zoned": false, 00:16:32.144 "supported_io_types": { 00:16:32.144 "read": true, 00:16:32.144 "write": true, 00:16:32.144 "unmap": false, 00:16:32.144 "flush": false, 00:16:32.144 "reset": true, 00:16:32.144 "nvme_admin": false, 00:16:32.144 "nvme_io": false, 00:16:32.144 "nvme_io_md": false, 00:16:32.144 "write_zeroes": true, 00:16:32.144 "zcopy": false, 00:16:32.144 "get_zone_info": false, 00:16:32.144 "zone_management": false, 00:16:32.144 "zone_append": false, 00:16:32.144 "compare": false, 00:16:32.144 "compare_and_write": false, 00:16:32.144 "abort": false, 00:16:32.144 "seek_hole": false, 00:16:32.144 "seek_data": false, 00:16:32.144 "copy": false, 00:16:32.144 "nvme_iov_md": false 00:16:32.144 }, 00:16:32.144 "memory_domains": [ 00:16:32.144 { 00:16:32.144 "dma_device_id": "system", 00:16:32.144 "dma_device_type": 1 00:16:32.144 }, 00:16:32.144 { 00:16:32.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.144 "dma_device_type": 2 00:16:32.144 }, 00:16:32.144 { 00:16:32.144 "dma_device_id": "system", 00:16:32.144 "dma_device_type": 1 00:16:32.144 }, 00:16:32.144 { 00:16:32.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.144 "dma_device_type": 2 00:16:32.144 } 00:16:32.144 ], 00:16:32.144 "driver_specific": { 00:16:32.144 "raid": { 00:16:32.144 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:32.144 "strip_size_kb": 0, 00:16:32.144 "state": "online", 00:16:32.144 "raid_level": "raid1", 00:16:32.145 "superblock": true, 00:16:32.145 "num_base_bdevs": 2, 00:16:32.145 
"num_base_bdevs_discovered": 2, 00:16:32.145 "num_base_bdevs_operational": 2, 00:16:32.145 "base_bdevs_list": [ 00:16:32.145 { 00:16:32.145 "name": "pt1", 00:16:32.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.145 "is_configured": true, 00:16:32.145 "data_offset": 256, 00:16:32.145 "data_size": 7936 00:16:32.145 }, 00:16:32.145 { 00:16:32.145 "name": "pt2", 00:16:32.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.145 "is_configured": true, 00:16:32.145 "data_offset": 256, 00:16:32.145 "data_size": 7936 00:16:32.145 } 00:16:32.145 ] 00:16:32.145 } 00:16:32.145 } 00:16:32.145 }' 00:16:32.145 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:32.404 pt2' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.404 
02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.404 [2024-12-07 02:49:43.410382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' e2d90645-9ad8-4396-9b8b-fe738ecd59d9 '!=' e2d90645-9ad8-4396-9b8b-fe738ecd59d9 ']' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.404 [2024-12-07 02:49:43.450108] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.404 02:49:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.404 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.664 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.664 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.664 "name": "raid_bdev1", 00:16:32.664 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:32.664 "strip_size_kb": 0, 00:16:32.664 "state": "online", 00:16:32.664 "raid_level": "raid1", 00:16:32.664 "superblock": true, 00:16:32.664 "num_base_bdevs": 2, 00:16:32.664 "num_base_bdevs_discovered": 1, 00:16:32.664 "num_base_bdevs_operational": 1, 00:16:32.664 "base_bdevs_list": [ 00:16:32.664 { 00:16:32.664 "name": null, 00:16:32.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.664 "is_configured": false, 00:16:32.664 "data_offset": 0, 00:16:32.664 "data_size": 7936 00:16:32.664 }, 00:16:32.664 { 00:16:32.664 "name": "pt2", 00:16:32.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.664 "is_configured": true, 00:16:32.664 "data_offset": 256, 00:16:32.664 "data_size": 7936 00:16:32.664 } 00:16:32.664 ] 00:16:32.664 }' 00:16:32.664 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:32.664 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.923 [2024-12-07 02:49:43.901279] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.923 [2024-12-07 02:49:43.901310] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.923 [2024-12-07 02:49:43.901383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.923 [2024-12-07 02:49:43.901429] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.923 [2024-12-07 02:49:43.901439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:32.923 02:49:43 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.923 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.923 [2024-12-07 02:49:43.973167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.923 [2024-12-07 02:49:43.973222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.923 
[2024-12-07 02:49:43.973255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:32.923 [2024-12-07 02:49:43.973264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.923 [2024-12-07 02:49:43.975185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.923 [2024-12-07 02:49:43.975217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.923 [2024-12-07 02:49:43.975265] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:32.923 [2024-12-07 02:49:43.975295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.923 [2024-12-07 02:49:43.975356] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:16:32.923 [2024-12-07 02:49:43.975368] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:32.923 [2024-12-07 02:49:43.975440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:32.923 [2024-12-07 02:49:43.975518] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:16:32.923 [2024-12-07 02:49:43.975528] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:16:32.923 [2024-12-07 02:49:43.975602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.923 pt2 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.924 02:49:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.182 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.182 "name": "raid_bdev1", 00:16:33.182 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:33.182 "strip_size_kb": 0, 00:16:33.182 "state": "online", 00:16:33.182 "raid_level": "raid1", 00:16:33.182 "superblock": true, 00:16:33.182 "num_base_bdevs": 2, 00:16:33.182 "num_base_bdevs_discovered": 1, 00:16:33.182 "num_base_bdevs_operational": 1, 00:16:33.182 "base_bdevs_list": [ 00:16:33.182 { 00:16:33.182 
"name": null, 00:16:33.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.182 "is_configured": false, 00:16:33.182 "data_offset": 256, 00:16:33.182 "data_size": 7936 00:16:33.182 }, 00:16:33.182 { 00:16:33.182 "name": "pt2", 00:16:33.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.182 "is_configured": true, 00:16:33.182 "data_offset": 256, 00:16:33.182 "data_size": 7936 00:16:33.182 } 00:16:33.182 ] 00:16:33.182 }' 00:16:33.182 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.182 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 [2024-12-07 02:49:44.452333] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.442 [2024-12-07 02:49:44.452361] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.442 [2024-12-07 02:49:44.452418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.442 [2024-12-07 02:49:44.452458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.442 [2024-12-07 02:49:44.452469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.442 02:49:44 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.442 [2024-12-07 02:49:44.512213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:33.442 [2024-12-07 02:49:44.512261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.442 [2024-12-07 02:49:44.512278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:33.442 [2024-12-07 02:49:44.512292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.442 [2024-12-07 02:49:44.514192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.442 [2024-12-07 02:49:44.514227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:33.442 [2024-12-07 02:49:44.514272] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:16:33.442 [2024-12-07 02:49:44.514310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:33.442 [2024-12-07 02:49:44.514406] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:33.442 [2024-12-07 02:49:44.514424] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.442 [2024-12-07 02:49:44.514445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:16:33.442 [2024-12-07 02:49:44.514477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.442 [2024-12-07 02:49:44.514528] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:33.442 [2024-12-07 02:49:44.514540] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:33.442 [2024-12-07 02:49:44.514625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:33.442 [2024-12-07 02:49:44.514699] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:33.442 [2024-12-07 02:49:44.514707] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:33.442 [2024-12-07 02:49:44.514780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.442 pt1 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.442 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.701 "name": "raid_bdev1", 00:16:33.701 "uuid": "e2d90645-9ad8-4396-9b8b-fe738ecd59d9", 00:16:33.701 "strip_size_kb": 0, 00:16:33.701 "state": "online", 00:16:33.701 "raid_level": "raid1", 00:16:33.701 "superblock": true, 00:16:33.701 "num_base_bdevs": 2, 00:16:33.701 "num_base_bdevs_discovered": 1, 00:16:33.701 
"num_base_bdevs_operational": 1, 00:16:33.701 "base_bdevs_list": [ 00:16:33.701 { 00:16:33.701 "name": null, 00:16:33.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.701 "is_configured": false, 00:16:33.701 "data_offset": 256, 00:16:33.701 "data_size": 7936 00:16:33.701 }, 00:16:33.701 { 00:16:33.701 "name": "pt2", 00:16:33.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.701 "is_configured": true, 00:16:33.701 "data_offset": 256, 00:16:33.701 "data_size": 7936 00:16:33.701 } 00:16:33.701 ] 00:16:33.701 }' 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.701 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.960 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:33.960 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.960 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.960 02:49:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:33.960 02:49:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.960 02:49:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:33.960 02:49:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.960 02:49:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:33.960 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.960 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:33.960 [2024-12-07 
02:49:45.031654] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' e2d90645-9ad8-4396-9b8b-fe738ecd59d9 '!=' e2d90645-9ad8-4396-9b8b-fe738ecd59d9 ']' 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 98040 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98040 ']' 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 98040 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98040 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:34.218 killing process with pid 98040 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98040' 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 98040 00:16:34.218 [2024-12-07 02:49:45.115948] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:34.218 [2024-12-07 02:49:45.116031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:34.218 [2024-12-07 02:49:45.116087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:16:34.218 [2024-12-07 02:49:45.116096] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:34.218 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 98040 00:16:34.218 [2024-12-07 02:49:45.140843] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.477 02:49:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:16:34.477 00:16:34.477 real 0m5.073s 00:16:34.477 user 0m8.128s 00:16:34.477 sys 0m1.238s 00:16:34.477 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:34.477 02:49:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.477 ************************************ 00:16:34.477 END TEST raid_superblock_test_md_separate 00:16:34.477 ************************************ 00:16:34.477 02:49:45 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:16:34.477 02:49:45 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:16:34.477 02:49:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:34.477 02:49:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:34.477 02:49:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.477 ************************************ 00:16:34.477 START TEST raid_rebuild_test_sb_md_separate 00:16:34.477 ************************************ 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:34.477 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:34.478 
02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=98352 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 98352 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 98352 ']' 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:34.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:34.478 02:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:34.736 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:16:34.736 Zero copy mechanism will not be used. 00:16:34.736 [2024-12-07 02:49:45.568885] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:34.736 [2024-12-07 02:49:45.569034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98352 ] 00:16:34.736 [2024-12-07 02:49:45.731047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.736 [2024-12-07 02:49:45.776421] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.995 [2024-12-07 02:49:45.818838] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.995 [2024-12-07 02:49:45.818877] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.564 BaseBdev1_malloc 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:35.564 02:49:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.564 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.564 [2024-12-07 02:49:46.413089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:35.564 [2024-12-07 02:49:46.413169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.565 [2024-12-07 02:49:46.413199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.565 [2024-12-07 02:49:46.413210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.565 [2024-12-07 02:49:46.415127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.565 [2024-12-07 02:49:46.415166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.565 BaseBdev1 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.565 BaseBdev2_malloc 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.565 [2024-12-07 02:49:46.459322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:35.565 [2024-12-07 02:49:46.459428] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.565 [2024-12-07 02:49:46.459474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.565 [2024-12-07 02:49:46.459494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.565 [2024-12-07 02:49:46.463880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.565 [2024-12-07 02:49:46.463934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.565 BaseBdev2 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.565 spare_malloc 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.565 spare_delay 00:16:35.565 02:49:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.565 [2024-12-07 02:49:46.502738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.565 [2024-12-07 02:49:46.502810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.565 [2024-12-07 02:49:46.502831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:35.565 [2024-12-07 02:49:46.502842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.565 [2024-12-07 02:49:46.504691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.565 [2024-12-07 02:49:46.504727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.565 spare 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.565 [2024-12-07 02:49:46.514763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.565 [2024-12-07 02:49:46.516550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:16:35.565 [2024-12-07 02:49:46.516737] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:35.565 [2024-12-07 02:49:46.516754] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.565 [2024-12-07 02:49:46.516824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:35.565 [2024-12-07 02:49:46.516917] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:35.565 [2024-12-07 02:49:46.516937] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:35.565 [2024-12-07 02:49:46.517026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.565 "name": "raid_bdev1", 00:16:35.565 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:35.565 "strip_size_kb": 0, 00:16:35.565 "state": "online", 00:16:35.565 "raid_level": "raid1", 00:16:35.565 "superblock": true, 00:16:35.565 "num_base_bdevs": 2, 00:16:35.565 "num_base_bdevs_discovered": 2, 00:16:35.565 "num_base_bdevs_operational": 2, 00:16:35.565 "base_bdevs_list": [ 00:16:35.565 { 00:16:35.565 "name": "BaseBdev1", 00:16:35.565 "uuid": "b34468db-beb2-530b-b30b-9b97385d823f", 00:16:35.565 "is_configured": true, 00:16:35.565 "data_offset": 256, 00:16:35.565 "data_size": 7936 00:16:35.565 }, 00:16:35.565 { 00:16:35.565 "name": "BaseBdev2", 00:16:35.565 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:35.565 "is_configured": true, 00:16:35.565 "data_offset": 256, 00:16:35.565 "data_size": 7936 00:16:35.565 } 00:16:35.565 ] 00:16:35.565 }' 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.565 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.134 02:49:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.134 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.134 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:36.134 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.134 [2024-12-07 02:49:46.966239] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.134 02:49:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.134 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:36.394 [2024-12-07 02:49:47.233534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:36.394 /dev/nbd0 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:36.394 
02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:36.394 1+0 records in 00:16:36.394 1+0 records out 00:16:36.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037066 s, 11.1 MB/s 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:36.394 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:36.962 7936+0 records in 00:16:36.962 7936+0 records out 00:16:36.963 32505856 bytes (33 MB, 31 MiB) copied, 0.562067 s, 57.8 MB/s 00:16:36.963 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:36.963 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:36.963 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:36.963 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:36.963 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:36.963 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:36.963 02:49:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:37.222 [2024-12-07 02:49:48.089682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.222 [2024-12-07 02:49:48.107013] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.222 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.223 "name": "raid_bdev1", 00:16:37.223 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:37.223 "strip_size_kb": 0, 00:16:37.223 "state": "online", 00:16:37.223 "raid_level": "raid1", 00:16:37.223 "superblock": true, 00:16:37.223 "num_base_bdevs": 2, 00:16:37.223 "num_base_bdevs_discovered": 1, 00:16:37.223 "num_base_bdevs_operational": 1, 00:16:37.223 "base_bdevs_list": [ 00:16:37.223 { 00:16:37.223 "name": null, 00:16:37.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.223 "is_configured": false, 00:16:37.223 "data_offset": 0, 00:16:37.223 "data_size": 7936 00:16:37.223 }, 00:16:37.223 { 00:16:37.223 "name": "BaseBdev2", 00:16:37.223 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:37.223 "is_configured": true, 00:16:37.223 "data_offset": 256, 00:16:37.223 "data_size": 7936 00:16:37.223 } 00:16:37.223 ] 00:16:37.223 }' 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.223 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.482 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.482 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:37.482 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:37.482 [2024-12-07 02:49:48.554250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.482 [2024-12-07 02:49:48.557159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:16:37.741 [2024-12-07 02:49:48.559422] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.741 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.741 02:49:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.681 02:49:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.681 "name": "raid_bdev1", 00:16:38.681 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:38.681 "strip_size_kb": 0, 00:16:38.681 "state": "online", 00:16:38.681 "raid_level": "raid1", 00:16:38.681 "superblock": true, 00:16:38.681 "num_base_bdevs": 2, 00:16:38.681 "num_base_bdevs_discovered": 2, 00:16:38.681 "num_base_bdevs_operational": 2, 00:16:38.681 "process": { 00:16:38.681 "type": "rebuild", 00:16:38.681 "target": "spare", 00:16:38.681 "progress": { 00:16:38.681 "blocks": 2560, 00:16:38.681 "percent": 32 00:16:38.681 } 00:16:38.681 }, 00:16:38.681 "base_bdevs_list": [ 00:16:38.681 { 00:16:38.681 "name": "spare", 00:16:38.681 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:38.681 "is_configured": true, 00:16:38.681 "data_offset": 256, 00:16:38.681 "data_size": 7936 00:16:38.681 }, 00:16:38.681 { 00:16:38.681 "name": "BaseBdev2", 00:16:38.681 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:38.681 "is_configured": true, 00:16:38.681 "data_offset": 256, 00:16:38.681 "data_size": 7936 00:16:38.681 } 00:16:38.681 ] 00:16:38.681 }' 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.681 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.682 [2024-12-07 02:49:49.726176] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.942 [2024-12-07 02:49:49.767756] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:38.942 [2024-12-07 02:49:49.767816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.942 [2024-12-07 02:49:49.767836] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:38.942 [2024-12-07 02:49:49.767851] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.942 "name": "raid_bdev1", 00:16:38.942 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:38.942 "strip_size_kb": 0, 00:16:38.942 "state": "online", 00:16:38.942 "raid_level": "raid1", 00:16:38.942 "superblock": true, 00:16:38.942 "num_base_bdevs": 2, 00:16:38.942 "num_base_bdevs_discovered": 1, 00:16:38.942 "num_base_bdevs_operational": 1, 00:16:38.942 "base_bdevs_list": [ 00:16:38.942 { 00:16:38.942 "name": null, 00:16:38.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.942 "is_configured": false, 00:16:38.942 "data_offset": 0, 00:16:38.942 "data_size": 7936 00:16:38.942 }, 00:16:38.942 { 00:16:38.942 "name": "BaseBdev2", 00:16:38.942 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:38.942 "is_configured": true, 00:16:38.942 "data_offset": 256, 00:16:38.942 "data_size": 7936 00:16:38.942 } 00:16:38.942 ] 00:16:38.942 }' 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.942 02:49:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.202 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.202 02:49:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.203 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.463 "name": "raid_bdev1", 00:16:39.463 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:39.463 "strip_size_kb": 0, 00:16:39.463 "state": "online", 00:16:39.463 "raid_level": "raid1", 00:16:39.463 "superblock": true, 00:16:39.463 "num_base_bdevs": 2, 00:16:39.463 "num_base_bdevs_discovered": 1, 00:16:39.463 "num_base_bdevs_operational": 1, 00:16:39.463 "base_bdevs_list": [ 00:16:39.463 { 00:16:39.463 "name": null, 00:16:39.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.463 "is_configured": false, 00:16:39.463 "data_offset": 0, 00:16:39.463 "data_size": 7936 00:16:39.463 }, 00:16:39.463 { 00:16:39.463 "name": "BaseBdev2", 00:16:39.463 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:39.463 "is_configured": true, 00:16:39.463 "data_offset": 256, 00:16:39.463 "data_size": 7936 
00:16:39.463 } 00:16:39.463 ] 00:16:39.463 }' 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:39.463 [2024-12-07 02:49:50.391850] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.463 [2024-12-07 02:49:50.394489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:16:39.463 [2024-12-07 02:49:50.396678] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.463 02:49:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.404 "name": "raid_bdev1", 00:16:40.404 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:40.404 "strip_size_kb": 0, 00:16:40.404 "state": "online", 00:16:40.404 "raid_level": "raid1", 00:16:40.404 "superblock": true, 00:16:40.404 "num_base_bdevs": 2, 00:16:40.404 "num_base_bdevs_discovered": 2, 00:16:40.404 "num_base_bdevs_operational": 2, 00:16:40.404 "process": { 00:16:40.404 "type": "rebuild", 00:16:40.404 "target": "spare", 00:16:40.404 "progress": { 00:16:40.404 "blocks": 2560, 00:16:40.404 "percent": 32 00:16:40.404 } 00:16:40.404 }, 00:16:40.404 "base_bdevs_list": [ 00:16:40.404 { 00:16:40.404 "name": "spare", 00:16:40.404 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:40.404 "is_configured": true, 00:16:40.404 "data_offset": 256, 00:16:40.404 "data_size": 7936 00:16:40.404 }, 00:16:40.404 { 00:16:40.404 "name": "BaseBdev2", 00:16:40.404 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:40.404 "is_configured": true, 00:16:40.404 "data_offset": 256, 00:16:40.404 "data_size": 7936 00:16:40.404 } 00:16:40.404 ] 00:16:40.404 }' 00:16:40.404 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:40.664 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=604 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.664 
02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.664 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.665 "name": "raid_bdev1", 00:16:40.665 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:40.665 "strip_size_kb": 0, 00:16:40.665 "state": "online", 00:16:40.665 "raid_level": "raid1", 00:16:40.665 "superblock": true, 00:16:40.665 "num_base_bdevs": 2, 00:16:40.665 "num_base_bdevs_discovered": 2, 00:16:40.665 "num_base_bdevs_operational": 2, 00:16:40.665 "process": { 00:16:40.665 "type": "rebuild", 00:16:40.665 "target": "spare", 00:16:40.665 "progress": { 00:16:40.665 "blocks": 2816, 00:16:40.665 "percent": 35 00:16:40.665 } 00:16:40.665 }, 00:16:40.665 "base_bdevs_list": [ 00:16:40.665 { 00:16:40.665 "name": "spare", 00:16:40.665 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:40.665 "is_configured": true, 00:16:40.665 "data_offset": 256, 00:16:40.665 "data_size": 7936 00:16:40.665 }, 00:16:40.665 { 00:16:40.665 "name": "BaseBdev2", 00:16:40.665 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:40.665 "is_configured": true, 00:16:40.665 "data_offset": 256, 00:16:40.665 "data_size": 7936 00:16:40.665 } 00:16:40.665 ] 00:16:40.665 }' 00:16:40.665 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.665 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.665 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.665 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.665 02:49:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.046 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.046 "name": "raid_bdev1", 00:16:42.046 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:42.046 "strip_size_kb": 0, 00:16:42.046 
"state": "online", 00:16:42.046 "raid_level": "raid1", 00:16:42.046 "superblock": true, 00:16:42.046 "num_base_bdevs": 2, 00:16:42.046 "num_base_bdevs_discovered": 2, 00:16:42.046 "num_base_bdevs_operational": 2, 00:16:42.046 "process": { 00:16:42.046 "type": "rebuild", 00:16:42.046 "target": "spare", 00:16:42.046 "progress": { 00:16:42.046 "blocks": 5632, 00:16:42.046 "percent": 70 00:16:42.047 } 00:16:42.047 }, 00:16:42.047 "base_bdevs_list": [ 00:16:42.047 { 00:16:42.047 "name": "spare", 00:16:42.047 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:42.047 "is_configured": true, 00:16:42.047 "data_offset": 256, 00:16:42.047 "data_size": 7936 00:16:42.047 }, 00:16:42.047 { 00:16:42.047 "name": "BaseBdev2", 00:16:42.047 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:42.047 "is_configured": true, 00:16:42.047 "data_offset": 256, 00:16:42.047 "data_size": 7936 00:16:42.047 } 00:16:42.047 ] 00:16:42.047 }' 00:16:42.047 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.047 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.047 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.047 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.047 02:49:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.616 [2024-12-07 02:49:53.516829] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:42.616 [2024-12-07 02:49:53.516922] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:42.616 [2024-12-07 02:49:53.517023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.876 "name": "raid_bdev1", 00:16:42.876 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:42.876 "strip_size_kb": 0, 00:16:42.876 "state": "online", 00:16:42.876 "raid_level": "raid1", 00:16:42.876 "superblock": true, 00:16:42.876 "num_base_bdevs": 2, 00:16:42.876 "num_base_bdevs_discovered": 2, 00:16:42.876 "num_base_bdevs_operational": 2, 00:16:42.876 "base_bdevs_list": [ 00:16:42.876 { 00:16:42.876 "name": "spare", 00:16:42.876 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:42.876 "is_configured": true, 00:16:42.876 "data_offset": 256, 00:16:42.876 "data_size": 7936 
00:16:42.876 }, 00:16:42.876 { 00:16:42.876 "name": "BaseBdev2", 00:16:42.876 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:42.876 "is_configured": true, 00:16:42.876 "data_offset": 256, 00:16:42.876 "data_size": 7936 00:16:42.876 } 00:16:42.876 ] 00:16:42.876 }' 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.876 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.135 02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.135 
02:49:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.135 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.135 "name": "raid_bdev1", 00:16:43.135 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:43.135 "strip_size_kb": 0, 00:16:43.135 "state": "online", 00:16:43.135 "raid_level": "raid1", 00:16:43.135 "superblock": true, 00:16:43.135 "num_base_bdevs": 2, 00:16:43.135 "num_base_bdevs_discovered": 2, 00:16:43.135 "num_base_bdevs_operational": 2, 00:16:43.135 "base_bdevs_list": [ 00:16:43.135 { 00:16:43.135 "name": "spare", 00:16:43.135 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:43.135 "is_configured": true, 00:16:43.135 "data_offset": 256, 00:16:43.135 "data_size": 7936 00:16:43.135 }, 00:16:43.135 { 00:16:43.135 "name": "BaseBdev2", 00:16:43.135 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:43.135 "is_configured": true, 00:16:43.135 "data_offset": 256, 00:16:43.135 "data_size": 7936 00:16:43.135 } 00:16:43.135 ] 00:16:43.135 }' 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.136 02:49:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.136 "name": "raid_bdev1", 00:16:43.136 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:43.136 "strip_size_kb": 0, 00:16:43.136 "state": "online", 00:16:43.136 "raid_level": "raid1", 00:16:43.136 "superblock": true, 00:16:43.136 "num_base_bdevs": 2, 00:16:43.136 "num_base_bdevs_discovered": 2, 00:16:43.136 "num_base_bdevs_operational": 2, 00:16:43.136 "base_bdevs_list": [ 00:16:43.136 { 00:16:43.136 "name": "spare", 00:16:43.136 "uuid": 
"05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:43.136 "is_configured": true, 00:16:43.136 "data_offset": 256, 00:16:43.136 "data_size": 7936 00:16:43.136 }, 00:16:43.136 { 00:16:43.136 "name": "BaseBdev2", 00:16:43.136 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:43.136 "is_configured": true, 00:16:43.136 "data_offset": 256, 00:16:43.136 "data_size": 7936 00:16:43.136 } 00:16:43.136 ] 00:16:43.136 }' 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.136 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.704 [2024-12-07 02:49:54.591635] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.704 [2024-12-07 02:49:54.591668] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.704 [2024-12-07 02:49:54.591760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.704 [2024-12-07 02:49:54.591837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.704 [2024-12-07 02:49:54.591850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:16:43.704 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:43.705 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:43.705 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 
00:16:43.964 /dev/nbd0 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:43.964 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.965 1+0 records in 00:16:43.965 1+0 records out 00:16:43.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421578 s, 9.7 MB/s 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.965 02:49:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:43.965 02:49:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:44.225 /dev/nbd1 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:44.225 1+0 records in 00:16:44.225 1+0 records out 00:16:44.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041023 s, 10.0 MB/s 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.225 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.485 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:44.745 
02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.745 [2024-12-07 02:49:55.678425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.745 [2024-12-07 02:49:55.678488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.745 [2024-12-07 02:49:55.678508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:44.745 [2024-12-07 02:49:55.678520] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.745 [2024-12-07 02:49:55.680428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.745 [2024-12-07 02:49:55.680472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.745 [2024-12-07 02:49:55.680527] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:16:44.745 [2024-12-07 02:49:55.680571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.745 [2024-12-07 02:49:55.680709] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.745 spare 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.745 [2024-12-07 02:49:55.780603] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:16:44.745 [2024-12-07 02:49:55.780631] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:44.745 [2024-12-07 02:49:55.780754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:16:44.745 [2024-12-07 02:49:55.780867] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:16:44.745 [2024-12-07 02:49:55.780878] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:16:44.745 [2024-12-07 02:49:55.780968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:44.745 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.005 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.005 "name": "raid_bdev1", 00:16:45.005 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:45.005 "strip_size_kb": 0, 00:16:45.005 "state": "online", 00:16:45.005 "raid_level": "raid1", 00:16:45.005 "superblock": true, 00:16:45.005 "num_base_bdevs": 2, 00:16:45.005 "num_base_bdevs_discovered": 2, 00:16:45.005 "num_base_bdevs_operational": 2, 00:16:45.005 "base_bdevs_list": [ 
00:16:45.005 { 00:16:45.005 "name": "spare", 00:16:45.005 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:45.005 "is_configured": true, 00:16:45.005 "data_offset": 256, 00:16:45.005 "data_size": 7936 00:16:45.005 }, 00:16:45.005 { 00:16:45.005 "name": "BaseBdev2", 00:16:45.005 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:45.005 "is_configured": true, 00:16:45.005 "data_offset": 256, 00:16:45.005 "data_size": 7936 00:16:45.005 } 00:16:45.005 ] 00:16:45.005 }' 00:16:45.005 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.005 02:49:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.264 "name": "raid_bdev1", 00:16:45.264 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:45.264 "strip_size_kb": 0, 00:16:45.264 "state": "online", 00:16:45.264 "raid_level": "raid1", 00:16:45.264 "superblock": true, 00:16:45.264 "num_base_bdevs": 2, 00:16:45.264 "num_base_bdevs_discovered": 2, 00:16:45.264 "num_base_bdevs_operational": 2, 00:16:45.264 "base_bdevs_list": [ 00:16:45.264 { 00:16:45.264 "name": "spare", 00:16:45.264 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:45.264 "is_configured": true, 00:16:45.264 "data_offset": 256, 00:16:45.264 "data_size": 7936 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "name": "BaseBdev2", 00:16:45.264 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:45.264 "is_configured": true, 00:16:45.264 "data_offset": 256, 00:16:45.264 "data_size": 7936 00:16:45.264 } 00:16:45.264 ] 00:16:45.264 }' 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.264 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.524 [2024-12-07 02:49:56.413165] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.524 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.525 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.525 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.525 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.525 "name": "raid_bdev1", 00:16:45.525 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:45.525 "strip_size_kb": 0, 00:16:45.525 "state": "online", 00:16:45.525 "raid_level": "raid1", 00:16:45.525 "superblock": true, 00:16:45.525 "num_base_bdevs": 2, 00:16:45.525 "num_base_bdevs_discovered": 1, 00:16:45.525 "num_base_bdevs_operational": 1, 00:16:45.525 "base_bdevs_list": [ 00:16:45.525 { 00:16:45.525 "name": null, 00:16:45.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.525 "is_configured": false, 00:16:45.525 "data_offset": 0, 00:16:45.525 "data_size": 7936 00:16:45.525 }, 00:16:45.525 { 00:16:45.525 "name": "BaseBdev2", 00:16:45.525 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:45.525 "is_configured": true, 00:16:45.525 "data_offset": 256, 00:16:45.525 "data_size": 7936 00:16:45.525 } 00:16:45.525 ] 00:16:45.525 }' 00:16:45.525 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.525 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.785 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:45.785 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:45.785 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:45.785 [2024-12-07 02:49:56.852466] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.785 [2024-12-07 02:49:56.852632] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.785 [2024-12-07 02:49:56.852669] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:45.785 [2024-12-07 02:49:56.852714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.785 [2024-12-07 02:49:56.854318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:16:45.785 [2024-12-07 02:49:56.856150] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.785 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.785 02:49:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.168 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.169 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.169 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.169 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.169 "name": "raid_bdev1", 00:16:47.169 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:47.169 "strip_size_kb": 0, 00:16:47.169 "state": "online", 00:16:47.169 "raid_level": "raid1", 00:16:47.169 "superblock": true, 00:16:47.169 "num_base_bdevs": 2, 00:16:47.169 "num_base_bdevs_discovered": 2, 00:16:47.169 "num_base_bdevs_operational": 2, 00:16:47.169 "process": { 00:16:47.169 "type": "rebuild", 00:16:47.169 "target": "spare", 00:16:47.169 "progress": { 00:16:47.169 "blocks": 2560, 00:16:47.169 "percent": 32 00:16:47.169 } 00:16:47.169 }, 00:16:47.169 "base_bdevs_list": [ 00:16:47.169 { 00:16:47.169 "name": "spare", 00:16:47.169 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:47.169 "is_configured": true, 00:16:47.169 "data_offset": 256, 00:16:47.169 "data_size": 7936 00:16:47.169 }, 00:16:47.169 { 00:16:47.169 "name": "BaseBdev2", 00:16:47.169 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:47.169 "is_configured": true, 00:16:47.169 "data_offset": 256, 00:16:47.169 "data_size": 7936 00:16:47.169 } 00:16:47.169 ] 00:16:47.169 }' 00:16:47.169 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.169 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.169 02:49:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.169 
02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.169 [2024-12-07 02:49:58.020201] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.169 [2024-12-07 02:49:58.060935] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.169 [2024-12-07 02:49:58.061006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.169 [2024-12-07 02:49:58.061022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.169 [2024-12-07 02:49:58.061029] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.169 02:49:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.169 "name": "raid_bdev1", 00:16:47.169 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:47.169 "strip_size_kb": 0, 00:16:47.169 "state": "online", 00:16:47.169 "raid_level": "raid1", 00:16:47.169 "superblock": true, 00:16:47.169 "num_base_bdevs": 2, 00:16:47.169 "num_base_bdevs_discovered": 1, 00:16:47.169 "num_base_bdevs_operational": 1, 00:16:47.169 "base_bdevs_list": [ 00:16:47.169 { 00:16:47.169 "name": null, 00:16:47.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.169 "is_configured": false, 00:16:47.169 "data_offset": 0, 00:16:47.169 "data_size": 7936 00:16:47.169 }, 00:16:47.169 { 00:16:47.169 "name": "BaseBdev2", 00:16:47.169 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:47.169 "is_configured": true, 00:16:47.169 "data_offset": 256, 00:16:47.169 "data_size": 7936 00:16:47.169 } 
00:16:47.169 ] 00:16:47.169 }' 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.169 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.739 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:47.739 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.739 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:47.739 [2024-12-07 02:49:58.523492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:47.739 [2024-12-07 02:49:58.523549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.739 [2024-12-07 02:49:58.523573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:47.739 [2024-12-07 02:49:58.523592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.739 [2024-12-07 02:49:58.523789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.739 [2024-12-07 02:49:58.523811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:47.739 [2024-12-07 02:49:58.523865] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:47.739 [2024-12-07 02:49:58.523876] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:47.739 [2024-12-07 02:49:58.523889] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:47.739 [2024-12-07 02:49:58.523914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.739 [2024-12-07 02:49:58.525210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:47.739 [2024-12-07 02:49:58.526969] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.739 spare 00:16:47.740 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.740 02:49:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.680 "name": 
"raid_bdev1", 00:16:48.680 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:48.680 "strip_size_kb": 0, 00:16:48.680 "state": "online", 00:16:48.680 "raid_level": "raid1", 00:16:48.680 "superblock": true, 00:16:48.680 "num_base_bdevs": 2, 00:16:48.680 "num_base_bdevs_discovered": 2, 00:16:48.680 "num_base_bdevs_operational": 2, 00:16:48.680 "process": { 00:16:48.680 "type": "rebuild", 00:16:48.680 "target": "spare", 00:16:48.680 "progress": { 00:16:48.680 "blocks": 2560, 00:16:48.680 "percent": 32 00:16:48.680 } 00:16:48.680 }, 00:16:48.680 "base_bdevs_list": [ 00:16:48.680 { 00:16:48.680 "name": "spare", 00:16:48.680 "uuid": "05b9b9b4-f336-5ab4-93e4-95a32d348682", 00:16:48.680 "is_configured": true, 00:16:48.680 "data_offset": 256, 00:16:48.680 "data_size": 7936 00:16:48.680 }, 00:16:48.680 { 00:16:48.680 "name": "BaseBdev2", 00:16:48.680 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:48.680 "is_configured": true, 00:16:48.680 "data_offset": 256, 00:16:48.680 "data_size": 7936 00:16:48.680 } 00:16:48.680 ] 00:16:48.680 }' 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.680 [2024-12-07 02:49:59.670079] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:16:48.680 [2024-12-07 02:49:59.730837] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.680 [2024-12-07 02:49:59.730895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.680 [2024-12-07 02:49:59.730909] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.680 [2024-12-07 02:49:59.730917] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.680 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.681 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:48.941 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.941 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.941 "name": "raid_bdev1", 00:16:48.941 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:48.941 "strip_size_kb": 0, 00:16:48.941 "state": "online", 00:16:48.941 "raid_level": "raid1", 00:16:48.941 "superblock": true, 00:16:48.941 "num_base_bdevs": 2, 00:16:48.941 "num_base_bdevs_discovered": 1, 00:16:48.941 "num_base_bdevs_operational": 1, 00:16:48.941 "base_bdevs_list": [ 00:16:48.941 { 00:16:48.941 "name": null, 00:16:48.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.941 "is_configured": false, 00:16:48.941 "data_offset": 0, 00:16:48.941 "data_size": 7936 00:16:48.941 }, 00:16:48.941 { 00:16:48.941 "name": "BaseBdev2", 00:16:48.941 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:48.941 "is_configured": true, 00:16:48.941 "data_offset": 256, 00:16:48.941 "data_size": 7936 00:16:48.941 } 00:16:48.941 ] 00:16:48.941 }' 00:16:48.941 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.941 02:49:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.201 02:50:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.201 "name": "raid_bdev1", 00:16:49.201 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:49.201 "strip_size_kb": 0, 00:16:49.201 "state": "online", 00:16:49.201 "raid_level": "raid1", 00:16:49.201 "superblock": true, 00:16:49.201 "num_base_bdevs": 2, 00:16:49.201 "num_base_bdevs_discovered": 1, 00:16:49.201 "num_base_bdevs_operational": 1, 00:16:49.201 "base_bdevs_list": [ 00:16:49.201 { 00:16:49.201 "name": null, 00:16:49.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.201 "is_configured": false, 00:16:49.201 "data_offset": 0, 00:16:49.201 "data_size": 7936 00:16:49.201 }, 00:16:49.201 { 00:16:49.201 "name": "BaseBdev2", 00:16:49.201 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:49.201 "is_configured": true, 00:16:49.201 "data_offset": 256, 00:16:49.201 "data_size": 7936 00:16:49.201 } 00:16:49.201 ] 00:16:49.201 }' 00:16:49.201 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:49.461 [2024-12-07 02:50:00.385393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.461 [2024-12-07 02:50:00.385465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.461 [2024-12-07 02:50:00.385482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:49.461 [2024-12-07 02:50:00.385492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.461 [2024-12-07 02:50:00.385673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.461 [2024-12-07 02:50:00.385691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:16:49.461 [2024-12-07 02:50:00.385732] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:49.461 [2024-12-07 02:50:00.385754] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:49.461 [2024-12-07 02:50:00.385765] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:49.461 [2024-12-07 02:50:00.385776] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:49.461 BaseBdev1 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.461 02:50:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.401 "name": "raid_bdev1", 00:16:50.401 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:50.401 "strip_size_kb": 0, 00:16:50.401 "state": "online", 00:16:50.401 "raid_level": "raid1", 00:16:50.401 "superblock": true, 00:16:50.401 "num_base_bdevs": 2, 00:16:50.401 "num_base_bdevs_discovered": 1, 00:16:50.401 "num_base_bdevs_operational": 1, 00:16:50.401 "base_bdevs_list": [ 00:16:50.401 { 00:16:50.401 "name": null, 00:16:50.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.401 "is_configured": false, 00:16:50.401 "data_offset": 0, 00:16:50.401 "data_size": 7936 00:16:50.401 }, 00:16:50.401 { 00:16:50.401 "name": "BaseBdev2", 00:16:50.401 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:50.401 "is_configured": true, 00:16:50.401 "data_offset": 256, 00:16:50.401 "data_size": 7936 00:16:50.401 } 00:16:50.401 ] 00:16:50.401 }' 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.401 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.971 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.971 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.971 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.971 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.971 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.971 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.971 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.972 "name": "raid_bdev1", 00:16:50.972 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:50.972 "strip_size_kb": 0, 00:16:50.972 "state": "online", 00:16:50.972 "raid_level": "raid1", 00:16:50.972 "superblock": true, 00:16:50.972 "num_base_bdevs": 2, 00:16:50.972 "num_base_bdevs_discovered": 1, 00:16:50.972 "num_base_bdevs_operational": 1, 00:16:50.972 "base_bdevs_list": [ 00:16:50.972 { 00:16:50.972 "name": null, 00:16:50.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.972 "is_configured": false, 00:16:50.972 "data_offset": 0, 00:16:50.972 "data_size": 7936 00:16:50.972 }, 00:16:50.972 { 00:16:50.972 "name": "BaseBdev2", 00:16:50.972 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:50.972 "is_configured": 
true, 00:16:50.972 "data_offset": 256, 00:16:50.972 "data_size": 7936 00:16:50.972 } 00:16:50.972 ] 00:16:50.972 }' 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:50.972 [2024-12-07 02:50:01.962687] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.972 [2024-12-07 02:50:01.962842] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:50.972 [2024-12-07 02:50:01.962855] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:50.972 request: 00:16:50.972 { 00:16:50.972 "base_bdev": "BaseBdev1", 00:16:50.972 "raid_bdev": "raid_bdev1", 00:16:50.972 "method": "bdev_raid_add_base_bdev", 00:16:50.972 "req_id": 1 00:16:50.972 } 00:16:50.972 Got JSON-RPC error response 00:16:50.972 response: 00:16:50.972 { 00:16:50.972 "code": -22, 00:16:50.972 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:50.972 } 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:50.972 02:50:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.919 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.920 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.920 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.920 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:51.920 02:50:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.179 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.179 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.179 "name": "raid_bdev1", 00:16:52.179 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:52.179 "strip_size_kb": 0, 00:16:52.179 "state": "online", 00:16:52.179 "raid_level": "raid1", 00:16:52.179 "superblock": true, 00:16:52.179 "num_base_bdevs": 2, 00:16:52.179 "num_base_bdevs_discovered": 1, 00:16:52.179 "num_base_bdevs_operational": 1, 00:16:52.179 "base_bdevs_list": [ 00:16:52.179 { 00:16:52.179 "name": null, 00:16:52.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.179 "is_configured": false, 00:16:52.179 
"data_offset": 0, 00:16:52.179 "data_size": 7936 00:16:52.179 }, 00:16:52.179 { 00:16:52.179 "name": "BaseBdev2", 00:16:52.179 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:52.179 "is_configured": true, 00:16:52.179 "data_offset": 256, 00:16:52.179 "data_size": 7936 00:16:52.179 } 00:16:52.179 ] 00:16:52.179 }' 00:16:52.179 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.179 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.439 "name": "raid_bdev1", 00:16:52.439 "uuid": "21507707-76d7-488a-834f-abf5e8b53791", 00:16:52.439 
"strip_size_kb": 0, 00:16:52.439 "state": "online", 00:16:52.439 "raid_level": "raid1", 00:16:52.439 "superblock": true, 00:16:52.439 "num_base_bdevs": 2, 00:16:52.439 "num_base_bdevs_discovered": 1, 00:16:52.439 "num_base_bdevs_operational": 1, 00:16:52.439 "base_bdevs_list": [ 00:16:52.439 { 00:16:52.439 "name": null, 00:16:52.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.439 "is_configured": false, 00:16:52.439 "data_offset": 0, 00:16:52.439 "data_size": 7936 00:16:52.439 }, 00:16:52.439 { 00:16:52.439 "name": "BaseBdev2", 00:16:52.439 "uuid": "5b569300-bc82-51ea-88d1-f69072022f22", 00:16:52.439 "is_configured": true, 00:16:52.439 "data_offset": 256, 00:16:52.439 "data_size": 7936 00:16:52.439 } 00:16:52.439 ] 00:16:52.439 }' 00:16:52.439 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 98352 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 98352 ']' 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 98352 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98352 00:16:52.699 02:50:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.699 killing process with pid 98352 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98352' 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 98352 00:16:52.699 Received shutdown signal, test time was about 60.000000 seconds 00:16:52.699 00:16:52.699 Latency(us) 00:16:52.699 [2024-12-07T02:50:03.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.699 [2024-12-07T02:50:03.777Z] =================================================================================================================== 00:16:52.699 [2024-12-07T02:50:03.777Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.699 [2024-12-07 02:50:03.631061] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.699 [2024-12-07 02:50:03.631197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.699 [2024-12-07 02:50:03.631267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.699 [2024-12-07 02:50:03.631277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:16:52.699 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 98352 00:16:52.699 [2024-12-07 02:50:03.664954] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.960 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:16:52.960 00:16:52.960 real 0m18.424s 00:16:52.960 user 0m24.551s 00:16:52.960 sys 0m2.689s 00:16:52.960 02:50:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.960 02:50:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:52.960 ************************************ 00:16:52.960 END TEST raid_rebuild_test_sb_md_separate 00:16:52.960 ************************************ 00:16:52.960 02:50:03 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:16:52.960 02:50:03 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:16:52.960 02:50:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:52.960 02:50:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.960 02:50:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.960 ************************************ 00:16:52.960 START TEST raid_state_function_test_sb_md_interleaved 00:16:52.960 ************************************ 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.960 02:50:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=99041 00:16:52.960 Process raid pid: 99041 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 99041' 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 99041 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99041 ']' 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.960 02:50:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:53.220 [2024-12-07 02:50:04.068839] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:16:53.220 [2024-12-07 02:50:04.068979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.220 [2024-12-07 02:50:04.231210] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.220 [2024-12-07 02:50:04.277284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.480 [2024-12-07 02:50:04.319471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.480 [2024-12-07 02:50:04.319512] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.048 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.048 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:54.048 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:54.048 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.048 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.048 [2024-12-07 02:50:04.884740] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.048 [2024-12-07 02:50:04.884787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.048 [2024-12-07 02:50:04.884799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.048 [2024-12-07 02:50:04.884807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.049 02:50:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.049 02:50:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.049 "name": "Existed_Raid", 00:16:54.049 "uuid": "1c88d288-d772-48f5-bac4-32a60ff093a5", 00:16:54.049 "strip_size_kb": 0, 00:16:54.049 "state": "configuring", 00:16:54.049 "raid_level": "raid1", 00:16:54.049 "superblock": true, 00:16:54.049 "num_base_bdevs": 2, 00:16:54.049 "num_base_bdevs_discovered": 0, 00:16:54.049 "num_base_bdevs_operational": 2, 00:16:54.049 "base_bdevs_list": [ 00:16:54.049 { 00:16:54.049 "name": "BaseBdev1", 00:16:54.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.049 "is_configured": false, 00:16:54.049 "data_offset": 0, 00:16:54.049 "data_size": 0 00:16:54.049 }, 00:16:54.049 { 00:16:54.049 "name": "BaseBdev2", 00:16:54.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.049 "is_configured": false, 00:16:54.049 "data_offset": 0, 00:16:54.049 "data_size": 0 00:16:54.049 } 00:16:54.049 ] 00:16:54.049 }' 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.049 02:50:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.308 [2024-12-07 02:50:05.284126] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.308 [2024-12-07 02:50:05.284169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name Existed_Raid, state 
configuring 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.308 [2024-12-07 02:50:05.296166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.308 [2024-12-07 02:50:05.296203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.308 [2024-12-07 02:50:05.296211] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.308 [2024-12-07 02:50:05.296220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.308 [2024-12-07 02:50:05.316985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.308 BaseBdev1 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.308 [ 00:16:54.308 { 00:16:54.308 "name": "BaseBdev1", 00:16:54.308 "aliases": [ 00:16:54.308 "94ccb0ff-1997-4d0d-8eae-77f394c6fd4a" 00:16:54.308 ], 00:16:54.308 "product_name": "Malloc disk", 00:16:54.308 "block_size": 4128, 00:16:54.308 "num_blocks": 8192, 00:16:54.308 "uuid": "94ccb0ff-1997-4d0d-8eae-77f394c6fd4a", 00:16:54.308 "md_size": 32, 00:16:54.308 
"md_interleave": true, 00:16:54.308 "dif_type": 0, 00:16:54.308 "assigned_rate_limits": { 00:16:54.308 "rw_ios_per_sec": 0, 00:16:54.308 "rw_mbytes_per_sec": 0, 00:16:54.308 "r_mbytes_per_sec": 0, 00:16:54.308 "w_mbytes_per_sec": 0 00:16:54.308 }, 00:16:54.308 "claimed": true, 00:16:54.308 "claim_type": "exclusive_write", 00:16:54.308 "zoned": false, 00:16:54.308 "supported_io_types": { 00:16:54.308 "read": true, 00:16:54.308 "write": true, 00:16:54.308 "unmap": true, 00:16:54.308 "flush": true, 00:16:54.308 "reset": true, 00:16:54.308 "nvme_admin": false, 00:16:54.308 "nvme_io": false, 00:16:54.308 "nvme_io_md": false, 00:16:54.308 "write_zeroes": true, 00:16:54.308 "zcopy": true, 00:16:54.308 "get_zone_info": false, 00:16:54.308 "zone_management": false, 00:16:54.308 "zone_append": false, 00:16:54.308 "compare": false, 00:16:54.308 "compare_and_write": false, 00:16:54.308 "abort": true, 00:16:54.308 "seek_hole": false, 00:16:54.308 "seek_data": false, 00:16:54.308 "copy": true, 00:16:54.308 "nvme_iov_md": false 00:16:54.308 }, 00:16:54.308 "memory_domains": [ 00:16:54.308 { 00:16:54.308 "dma_device_id": "system", 00:16:54.308 "dma_device_type": 1 00:16:54.308 }, 00:16:54.308 { 00:16:54.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.308 "dma_device_type": 2 00:16:54.308 } 00:16:54.308 ], 00:16:54.308 "driver_specific": {} 00:16:54.308 } 00:16:54.308 ] 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.308 02:50:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.308 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.309 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.309 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.567 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.567 "name": "Existed_Raid", 00:16:54.567 "uuid": "abb66810-f0b3-49a9-84e9-5dffa6790396", 00:16:54.567 "strip_size_kb": 0, 00:16:54.567 "state": "configuring", 00:16:54.567 "raid_level": "raid1", 
00:16:54.567 "superblock": true, 00:16:54.567 "num_base_bdevs": 2, 00:16:54.567 "num_base_bdevs_discovered": 1, 00:16:54.567 "num_base_bdevs_operational": 2, 00:16:54.567 "base_bdevs_list": [ 00:16:54.567 { 00:16:54.567 "name": "BaseBdev1", 00:16:54.567 "uuid": "94ccb0ff-1997-4d0d-8eae-77f394c6fd4a", 00:16:54.567 "is_configured": true, 00:16:54.567 "data_offset": 256, 00:16:54.567 "data_size": 7936 00:16:54.568 }, 00:16:54.568 { 00:16:54.568 "name": "BaseBdev2", 00:16:54.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.568 "is_configured": false, 00:16:54.568 "data_offset": 0, 00:16:54.568 "data_size": 0 00:16:54.568 } 00:16:54.568 ] 00:16:54.568 }' 00:16:54.568 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.568 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.826 [2024-12-07 02:50:05.768229] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.826 [2024-12-07 02:50:05.768273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name Existed_Raid, state configuring 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.826 [2024-12-07 02:50:05.780288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.826 [2024-12-07 02:50:05.782008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.826 [2024-12-07 02:50:05.782051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:54.826 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.827 
02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.827 "name": "Existed_Raid", 00:16:54.827 "uuid": "771b0aca-4c1e-4f0f-8374-fd9f005b2c67", 00:16:54.827 "strip_size_kb": 0, 00:16:54.827 "state": "configuring", 00:16:54.827 "raid_level": "raid1", 00:16:54.827 "superblock": true, 00:16:54.827 "num_base_bdevs": 2, 00:16:54.827 "num_base_bdevs_discovered": 1, 00:16:54.827 "num_base_bdevs_operational": 2, 00:16:54.827 "base_bdevs_list": [ 00:16:54.827 { 00:16:54.827 "name": "BaseBdev1", 00:16:54.827 "uuid": "94ccb0ff-1997-4d0d-8eae-77f394c6fd4a", 00:16:54.827 "is_configured": true, 00:16:54.827 "data_offset": 256, 00:16:54.827 "data_size": 7936 00:16:54.827 }, 00:16:54.827 { 00:16:54.827 "name": "BaseBdev2", 00:16:54.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.827 "is_configured": false, 00:16:54.827 "data_offset": 0, 00:16:54.827 "data_size": 0 00:16:54.827 } 00:16:54.827 ] 00:16:54.827 }' 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:16:54.827 02:50:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.396 [2024-12-07 02:50:06.251336] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.396 [2024-12-07 02:50:06.251888] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:55.396 [2024-12-07 02:50:06.251954] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:55.396 [2024-12-07 02:50:06.252328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:55.396 BaseBdev2 00:16:55.396 [2024-12-07 02:50:06.252576] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:55.396 [2024-12-07 02:50:06.252681] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000006980 00:16:55.396 [2024-12-07 02:50:06.252870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.396 [ 00:16:55.396 { 00:16:55.396 "name": "BaseBdev2", 00:16:55.396 "aliases": [ 00:16:55.396 "d3603733-eb2b-4bf5-8ef5-331e1d9b3276" 00:16:55.396 ], 00:16:55.396 "product_name": "Malloc disk", 00:16:55.396 "block_size": 4128, 00:16:55.396 "num_blocks": 8192, 00:16:55.396 "uuid": "d3603733-eb2b-4bf5-8ef5-331e1d9b3276", 00:16:55.396 "md_size": 32, 00:16:55.396 "md_interleave": true, 00:16:55.396 "dif_type": 0, 00:16:55.396 "assigned_rate_limits": { 00:16:55.396 "rw_ios_per_sec": 0, 00:16:55.396 "rw_mbytes_per_sec": 0, 00:16:55.396 "r_mbytes_per_sec": 0, 00:16:55.396 "w_mbytes_per_sec": 0 00:16:55.396 }, 00:16:55.396 "claimed": true, 00:16:55.396 "claim_type": "exclusive_write", 
00:16:55.396 "zoned": false, 00:16:55.396 "supported_io_types": { 00:16:55.396 "read": true, 00:16:55.396 "write": true, 00:16:55.396 "unmap": true, 00:16:55.396 "flush": true, 00:16:55.396 "reset": true, 00:16:55.396 "nvme_admin": false, 00:16:55.396 "nvme_io": false, 00:16:55.396 "nvme_io_md": false, 00:16:55.396 "write_zeroes": true, 00:16:55.396 "zcopy": true, 00:16:55.396 "get_zone_info": false, 00:16:55.396 "zone_management": false, 00:16:55.396 "zone_append": false, 00:16:55.396 "compare": false, 00:16:55.396 "compare_and_write": false, 00:16:55.396 "abort": true, 00:16:55.396 "seek_hole": false, 00:16:55.396 "seek_data": false, 00:16:55.396 "copy": true, 00:16:55.396 "nvme_iov_md": false 00:16:55.396 }, 00:16:55.396 "memory_domains": [ 00:16:55.396 { 00:16:55.396 "dma_device_id": "system", 00:16:55.396 "dma_device_type": 1 00:16:55.396 }, 00:16:55.396 { 00:16:55.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.396 "dma_device_type": 2 00:16:55.396 } 00:16:55.396 ], 00:16:55.396 "driver_specific": {} 00:16:55.396 } 00:16:55.396 ] 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.396 
02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.396 "name": "Existed_Raid", 00:16:55.396 "uuid": "771b0aca-4c1e-4f0f-8374-fd9f005b2c67", 00:16:55.396 "strip_size_kb": 0, 00:16:55.396 "state": "online", 00:16:55.396 "raid_level": "raid1", 00:16:55.396 "superblock": true, 00:16:55.396 "num_base_bdevs": 2, 00:16:55.396 "num_base_bdevs_discovered": 2, 00:16:55.396 
"num_base_bdevs_operational": 2, 00:16:55.396 "base_bdevs_list": [ 00:16:55.396 { 00:16:55.396 "name": "BaseBdev1", 00:16:55.396 "uuid": "94ccb0ff-1997-4d0d-8eae-77f394c6fd4a", 00:16:55.396 "is_configured": true, 00:16:55.396 "data_offset": 256, 00:16:55.396 "data_size": 7936 00:16:55.396 }, 00:16:55.396 { 00:16:55.396 "name": "BaseBdev2", 00:16:55.396 "uuid": "d3603733-eb2b-4bf5-8ef5-331e1d9b3276", 00:16:55.396 "is_configured": true, 00:16:55.396 "data_offset": 256, 00:16:55.396 "data_size": 7936 00:16:55.396 } 00:16:55.396 ] 00:16:55.396 }' 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.396 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:55.965 02:50:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.965 [2024-12-07 02:50:06.754903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.965 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:55.965 "name": "Existed_Raid", 00:16:55.965 "aliases": [ 00:16:55.965 "771b0aca-4c1e-4f0f-8374-fd9f005b2c67" 00:16:55.965 ], 00:16:55.965 "product_name": "Raid Volume", 00:16:55.965 "block_size": 4128, 00:16:55.965 "num_blocks": 7936, 00:16:55.965 "uuid": "771b0aca-4c1e-4f0f-8374-fd9f005b2c67", 00:16:55.965 "md_size": 32, 00:16:55.965 "md_interleave": true, 00:16:55.965 "dif_type": 0, 00:16:55.965 "assigned_rate_limits": { 00:16:55.965 "rw_ios_per_sec": 0, 00:16:55.965 "rw_mbytes_per_sec": 0, 00:16:55.965 "r_mbytes_per_sec": 0, 00:16:55.965 "w_mbytes_per_sec": 0 00:16:55.965 }, 00:16:55.965 "claimed": false, 00:16:55.965 "zoned": false, 00:16:55.965 "supported_io_types": { 00:16:55.965 "read": true, 00:16:55.965 "write": true, 00:16:55.965 "unmap": false, 00:16:55.965 "flush": false, 00:16:55.965 "reset": true, 00:16:55.965 "nvme_admin": false, 00:16:55.965 "nvme_io": false, 00:16:55.965 "nvme_io_md": false, 00:16:55.965 "write_zeroes": true, 00:16:55.965 "zcopy": false, 00:16:55.965 "get_zone_info": false, 00:16:55.965 "zone_management": false, 00:16:55.965 "zone_append": false, 00:16:55.965 "compare": false, 00:16:55.965 "compare_and_write": false, 00:16:55.965 "abort": false, 00:16:55.965 "seek_hole": false, 00:16:55.965 "seek_data": false, 00:16:55.965 "copy": false, 00:16:55.965 "nvme_iov_md": false 00:16:55.965 }, 00:16:55.965 "memory_domains": [ 00:16:55.965 { 00:16:55.965 "dma_device_id": "system", 00:16:55.965 "dma_device_type": 1 00:16:55.965 }, 00:16:55.965 { 00:16:55.965 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:55.965 "dma_device_type": 2 00:16:55.965 }, 00:16:55.965 { 00:16:55.965 "dma_device_id": "system", 00:16:55.965 "dma_device_type": 1 00:16:55.965 }, 00:16:55.965 { 00:16:55.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.965 "dma_device_type": 2 00:16:55.966 } 00:16:55.966 ], 00:16:55.966 "driver_specific": { 00:16:55.966 "raid": { 00:16:55.966 "uuid": "771b0aca-4c1e-4f0f-8374-fd9f005b2c67", 00:16:55.966 "strip_size_kb": 0, 00:16:55.966 "state": "online", 00:16:55.966 "raid_level": "raid1", 00:16:55.966 "superblock": true, 00:16:55.966 "num_base_bdevs": 2, 00:16:55.966 "num_base_bdevs_discovered": 2, 00:16:55.966 "num_base_bdevs_operational": 2, 00:16:55.966 "base_bdevs_list": [ 00:16:55.966 { 00:16:55.966 "name": "BaseBdev1", 00:16:55.966 "uuid": "94ccb0ff-1997-4d0d-8eae-77f394c6fd4a", 00:16:55.966 "is_configured": true, 00:16:55.966 "data_offset": 256, 00:16:55.966 "data_size": 7936 00:16:55.966 }, 00:16:55.966 { 00:16:55.966 "name": "BaseBdev2", 00:16:55.966 "uuid": "d3603733-eb2b-4bf5-8ef5-331e1d9b3276", 00:16:55.966 "is_configured": true, 00:16:55.966 "data_offset": 256, 00:16:55.966 "data_size": 7936 00:16:55.966 } 00:16:55.966 ] 00:16:55.966 } 00:16:55.966 } 00:16:55.966 }' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:55.966 BaseBdev2' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:55.966 
02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.966 [2024-12-07 02:50:06.930394] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.966 02:50:06 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.966 "name": "Existed_Raid", 00:16:55.966 "uuid": "771b0aca-4c1e-4f0f-8374-fd9f005b2c67", 00:16:55.966 "strip_size_kb": 0, 00:16:55.966 "state": "online", 00:16:55.966 "raid_level": "raid1", 00:16:55.966 "superblock": true, 00:16:55.966 "num_base_bdevs": 2, 00:16:55.966 "num_base_bdevs_discovered": 1, 00:16:55.966 "num_base_bdevs_operational": 1, 00:16:55.966 "base_bdevs_list": [ 00:16:55.966 { 00:16:55.966 "name": null, 00:16:55.966 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:55.966 "is_configured": false, 00:16:55.966 "data_offset": 0, 00:16:55.966 "data_size": 7936 00:16:55.966 }, 00:16:55.966 { 00:16:55.966 "name": "BaseBdev2", 00:16:55.966 "uuid": "d3603733-eb2b-4bf5-8ef5-331e1d9b3276", 00:16:55.966 "is_configured": true, 00:16:55.966 "data_offset": 256, 00:16:55.966 "data_size": 7936 00:16:55.966 } 00:16:55.966 ] 00:16:55.966 }' 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.966 02:50:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:56.537 02:50:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.537 [2024-12-07 02:50:07.457204] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.537 [2024-12-07 02:50:07.457295] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.537 [2024-12-07 02:50:07.469042] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.537 [2024-12-07 02:50:07.469094] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.537 [2024-12-07 02:50:07.469113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name Existed_Raid, state offline 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 99041 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99041 ']' 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99041 00:16:56.537 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:56.538 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.538 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99041 00:16:56.538 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:56.538 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:56.538 killing process with pid 99041 00:16:56.538 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99041' 00:16:56.538 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99041 00:16:56.538 [2024-12-07 02:50:07.565456] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.538 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99041 00:16:56.538 [2024-12-07 02:50:07.566407] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.798 
02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:16:56.798 00:16:56.798 real 0m3.839s 00:16:56.798 user 0m5.971s 00:16:56.798 sys 0m0.867s 00:16:56.798 ************************************ 00:16:56.798 END TEST raid_state_function_test_sb_md_interleaved 00:16:56.798 ************************************ 00:16:56.798 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:56.798 02:50:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.059 02:50:07 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:16:57.059 02:50:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:57.059 02:50:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.059 02:50:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.059 ************************************ 00:16:57.059 START TEST raid_superblock_test_md_interleaved 00:16:57.059 ************************************ 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:57.059 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=99271 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 99271 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99271 ']' 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.060 02:50:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.060 [2024-12-07 02:50:07.986884] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:16:57.060 [2024-12-07 02:50:07.987016] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99271 ] 00:16:57.320 [2024-12-07 02:50:08.144773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.320 [2024-12-07 02:50:08.189441] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.320 [2024-12-07 02:50:08.231603] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.320 [2024-12-07 02:50:08.231654] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 malloc1 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 [2024-12-07 02:50:08.833672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:57.893 [2024-12-07 02:50:08.833776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.893 [2024-12-07 02:50:08.833815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:57.893 [2024-12-07 02:50:08.833844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.893 
[2024-12-07 02:50:08.835654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.893 [2024-12-07 02:50:08.835725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:57.893 pt1 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 malloc2 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 [2024-12-07 02:50:08.883480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:57.893 [2024-12-07 02:50:08.883804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.893 [2024-12-07 02:50:08.883870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:57.893 [2024-12-07 02:50:08.883907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.893 [2024-12-07 02:50:08.888072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.893 [2024-12-07 02:50:08.888141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:57.893 pt2 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 [2024-12-07 02:50:08.896391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.893 [2024-12-07 02:50:08.899240] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.893 [2024-12-07 02:50:08.899518] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:16:57.893 [2024-12-07 02:50:08.899611] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:57.893 [2024-12-07 02:50:08.899793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:57.893 [2024-12-07 02:50:08.899951] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:16:57.893 [2024-12-07 02:50:08.900025] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:16:57.893 [2024-12-07 02:50:08.900253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.893 
02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.893 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.894 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.894 "name": "raid_bdev1", 00:16:57.894 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:57.894 "strip_size_kb": 0, 00:16:57.894 "state": "online", 00:16:57.894 "raid_level": "raid1", 00:16:57.894 "superblock": true, 00:16:57.894 "num_base_bdevs": 2, 00:16:57.894 "num_base_bdevs_discovered": 2, 00:16:57.894 "num_base_bdevs_operational": 2, 00:16:57.894 "base_bdevs_list": [ 00:16:57.894 { 00:16:57.894 "name": "pt1", 00:16:57.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:57.894 "is_configured": true, 00:16:57.894 "data_offset": 256, 00:16:57.894 "data_size": 7936 00:16:57.894 }, 00:16:57.894 { 00:16:57.894 "name": "pt2", 00:16:57.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:57.894 "is_configured": true, 00:16:57.894 "data_offset": 256, 00:16:57.894 "data_size": 7936 00:16:57.894 } 00:16:57.894 ] 00:16:57.894 }' 00:16:57.894 02:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.894 02:50:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.464 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:58.464 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:58.464 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.464 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.464 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.464 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.486 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.486 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.486 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.486 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.486 [2024-12-07 02:50:09.391816] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.486 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.486 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.486 "name": "raid_bdev1", 00:16:58.486 "aliases": [ 00:16:58.486 "b4d5e688-c65a-4644-b397-42e9fba90217" 00:16:58.486 ], 00:16:58.486 "product_name": "Raid Volume", 00:16:58.486 "block_size": 4128, 00:16:58.486 "num_blocks": 7936, 00:16:58.486 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:58.486 "md_size": 32, 
00:16:58.486 "md_interleave": true, 00:16:58.486 "dif_type": 0, 00:16:58.486 "assigned_rate_limits": { 00:16:58.486 "rw_ios_per_sec": 0, 00:16:58.486 "rw_mbytes_per_sec": 0, 00:16:58.486 "r_mbytes_per_sec": 0, 00:16:58.486 "w_mbytes_per_sec": 0 00:16:58.486 }, 00:16:58.486 "claimed": false, 00:16:58.486 "zoned": false, 00:16:58.486 "supported_io_types": { 00:16:58.486 "read": true, 00:16:58.486 "write": true, 00:16:58.486 "unmap": false, 00:16:58.486 "flush": false, 00:16:58.486 "reset": true, 00:16:58.486 "nvme_admin": false, 00:16:58.486 "nvme_io": false, 00:16:58.486 "nvme_io_md": false, 00:16:58.486 "write_zeroes": true, 00:16:58.486 "zcopy": false, 00:16:58.486 "get_zone_info": false, 00:16:58.486 "zone_management": false, 00:16:58.486 "zone_append": false, 00:16:58.486 "compare": false, 00:16:58.486 "compare_and_write": false, 00:16:58.486 "abort": false, 00:16:58.486 "seek_hole": false, 00:16:58.486 "seek_data": false, 00:16:58.486 "copy": false, 00:16:58.486 "nvme_iov_md": false 00:16:58.486 }, 00:16:58.486 "memory_domains": [ 00:16:58.486 { 00:16:58.486 "dma_device_id": "system", 00:16:58.486 "dma_device_type": 1 00:16:58.486 }, 00:16:58.486 { 00:16:58.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.486 "dma_device_type": 2 00:16:58.486 }, 00:16:58.486 { 00:16:58.486 "dma_device_id": "system", 00:16:58.486 "dma_device_type": 1 00:16:58.486 }, 00:16:58.486 { 00:16:58.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.486 "dma_device_type": 2 00:16:58.486 } 00:16:58.486 ], 00:16:58.486 "driver_specific": { 00:16:58.486 "raid": { 00:16:58.486 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:58.486 "strip_size_kb": 0, 00:16:58.486 "state": "online", 00:16:58.486 "raid_level": "raid1", 00:16:58.486 "superblock": true, 00:16:58.486 "num_base_bdevs": 2, 00:16:58.486 "num_base_bdevs_discovered": 2, 00:16:58.486 "num_base_bdevs_operational": 2, 00:16:58.486 "base_bdevs_list": [ 00:16:58.486 { 00:16:58.486 "name": "pt1", 00:16:58.486 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:16:58.486 "is_configured": true, 00:16:58.486 "data_offset": 256, 00:16:58.486 "data_size": 7936 00:16:58.486 }, 00:16:58.486 { 00:16:58.486 "name": "pt2", 00:16:58.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:58.486 "is_configured": true, 00:16:58.486 "data_offset": 256, 00:16:58.486 "data_size": 7936 00:16:58.486 } 00:16:58.486 ] 00:16:58.486 } 00:16:58.486 } 00:16:58.486 }' 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:58.487 pt2' 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:58.487 02:50:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:58.487 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:58.747 [2024-12-07 02:50:09.591320] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b4d5e688-c65a-4644-b397-42e9fba90217 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b4d5e688-c65a-4644-b397-42e9fba90217 ']' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 [2024-12-07 02:50:09.639033] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.747 [2024-12-07 02:50:09.639097] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.747 [2024-12-07 02:50:09.639179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.747 [2024-12-07 02:50:09.639261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.747 [2024-12-07 02:50:09.639308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.747 02:50:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 02:50:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 [2024-12-07 02:50:09.770816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:58.747 [2024-12-07 02:50:09.772619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:58.747 [2024-12-07 02:50:09.772717] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:16:58.747 [2024-12-07 02:50:09.772804] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:58.747 [2024-12-07 02:50:09.772882] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.747 [2024-12-07 02:50:09.772920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state configuring 00:16:58.747 request: 00:16:58.747 { 00:16:58.747 "name": "raid_bdev1", 00:16:58.747 "raid_level": "raid1", 00:16:58.747 "base_bdevs": [ 00:16:58.747 "malloc1", 00:16:58.747 "malloc2" 00:16:58.747 ], 00:16:58.747 "superblock": false, 00:16:58.747 "method": "bdev_raid_create", 00:16:58.747 "req_id": 1 00:16:58.747 } 00:16:58.747 Got JSON-RPC error response 00:16:58.747 response: 00:16:58.747 { 00:16:58.747 "code": -17, 00:16:58.747 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:58.747 } 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.747 02:50:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:58.747 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 [2024-12-07 02:50:09.830671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:59.008 [2024-12-07 02:50:09.830750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.008 [2024-12-07 02:50:09.830782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:59.008 [2024-12-07 02:50:09.830808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.008 [2024-12-07 02:50:09.832604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.008 [2024-12-07 02:50:09.832636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:59.008 [2024-12-07 02:50:09.832676] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:59.008 [2024-12-07 02:50:09.832715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.008 pt1 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 02:50:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.008 
"name": "raid_bdev1", 00:16:59.008 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:59.008 "strip_size_kb": 0, 00:16:59.008 "state": "configuring", 00:16:59.008 "raid_level": "raid1", 00:16:59.008 "superblock": true, 00:16:59.008 "num_base_bdevs": 2, 00:16:59.008 "num_base_bdevs_discovered": 1, 00:16:59.008 "num_base_bdevs_operational": 2, 00:16:59.008 "base_bdevs_list": [ 00:16:59.008 { 00:16:59.008 "name": "pt1", 00:16:59.008 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.008 "is_configured": true, 00:16:59.008 "data_offset": 256, 00:16:59.008 "data_size": 7936 00:16:59.008 }, 00:16:59.008 { 00:16:59.008 "name": null, 00:16:59.008 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.008 "is_configured": false, 00:16:59.008 "data_offset": 256, 00:16:59.008 "data_size": 7936 00:16:59.008 } 00:16:59.008 ] 00:16:59.008 }' 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.008 02:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.300 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:59.300 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:59.300 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.300 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.300 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.300 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.300 [2024-12-07 02:50:10.242012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.300 [2024-12-07 02:50:10.242102] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.300 [2024-12-07 02:50:10.242139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:59.300 [2024-12-07 02:50:10.242166] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.301 [2024-12-07 02:50:10.242301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.301 [2024-12-07 02:50:10.242342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.301 [2024-12-07 02:50:10.242401] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:59.301 [2024-12-07 02:50:10.242440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.301 [2024-12-07 02:50:10.242525] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006980 00:16:59.301 [2024-12-07 02:50:10.242561] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:59.301 [2024-12-07 02:50:10.242666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:59.301 [2024-12-07 02:50:10.242752] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006980 00:16:59.301 [2024-12-07 02:50:10.242789] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006980 00:16:59.301 [2024-12-07 02:50:10.242874] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.301 pt2 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:59.301 02:50:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.301 "name": 
"raid_bdev1", 00:16:59.301 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:59.301 "strip_size_kb": 0, 00:16:59.301 "state": "online", 00:16:59.301 "raid_level": "raid1", 00:16:59.301 "superblock": true, 00:16:59.301 "num_base_bdevs": 2, 00:16:59.301 "num_base_bdevs_discovered": 2, 00:16:59.301 "num_base_bdevs_operational": 2, 00:16:59.301 "base_bdevs_list": [ 00:16:59.301 { 00:16:59.301 "name": "pt1", 00:16:59.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.301 "is_configured": true, 00:16:59.301 "data_offset": 256, 00:16:59.301 "data_size": 7936 00:16:59.301 }, 00:16:59.301 { 00:16:59.301 "name": "pt2", 00:16:59.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.301 "is_configured": true, 00:16:59.301 "data_offset": 256, 00:16:59.301 "data_size": 7936 00:16:59.301 } 00:16:59.301 ] 00:16:59.301 }' 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.301 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.905 02:50:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:59.905 [2024-12-07 02:50:10.697464] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:59.905 "name": "raid_bdev1", 00:16:59.905 "aliases": [ 00:16:59.905 "b4d5e688-c65a-4644-b397-42e9fba90217" 00:16:59.905 ], 00:16:59.905 "product_name": "Raid Volume", 00:16:59.905 "block_size": 4128, 00:16:59.905 "num_blocks": 7936, 00:16:59.905 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:59.905 "md_size": 32, 00:16:59.905 "md_interleave": true, 00:16:59.905 "dif_type": 0, 00:16:59.905 "assigned_rate_limits": { 00:16:59.905 "rw_ios_per_sec": 0, 00:16:59.905 "rw_mbytes_per_sec": 0, 00:16:59.905 "r_mbytes_per_sec": 0, 00:16:59.905 "w_mbytes_per_sec": 0 00:16:59.905 }, 00:16:59.905 "claimed": false, 00:16:59.905 "zoned": false, 00:16:59.905 "supported_io_types": { 00:16:59.905 "read": true, 00:16:59.905 "write": true, 00:16:59.905 "unmap": false, 00:16:59.905 "flush": false, 00:16:59.905 "reset": true, 00:16:59.905 "nvme_admin": false, 00:16:59.905 "nvme_io": false, 00:16:59.905 "nvme_io_md": false, 00:16:59.905 "write_zeroes": true, 00:16:59.905 "zcopy": false, 00:16:59.905 "get_zone_info": false, 00:16:59.905 "zone_management": false, 00:16:59.905 "zone_append": false, 00:16:59.905 "compare": false, 00:16:59.905 "compare_and_write": false, 00:16:59.905 "abort": false, 00:16:59.905 "seek_hole": false, 00:16:59.905 "seek_data": false, 00:16:59.905 "copy": false, 00:16:59.905 "nvme_iov_md": 
false 00:16:59.905 }, 00:16:59.905 "memory_domains": [ 00:16:59.905 { 00:16:59.905 "dma_device_id": "system", 00:16:59.905 "dma_device_type": 1 00:16:59.905 }, 00:16:59.905 { 00:16:59.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.905 "dma_device_type": 2 00:16:59.905 }, 00:16:59.905 { 00:16:59.905 "dma_device_id": "system", 00:16:59.905 "dma_device_type": 1 00:16:59.905 }, 00:16:59.905 { 00:16:59.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.905 "dma_device_type": 2 00:16:59.905 } 00:16:59.905 ], 00:16:59.905 "driver_specific": { 00:16:59.905 "raid": { 00:16:59.905 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:59.905 "strip_size_kb": 0, 00:16:59.905 "state": "online", 00:16:59.905 "raid_level": "raid1", 00:16:59.905 "superblock": true, 00:16:59.905 "num_base_bdevs": 2, 00:16:59.905 "num_base_bdevs_discovered": 2, 00:16:59.905 "num_base_bdevs_operational": 2, 00:16:59.905 "base_bdevs_list": [ 00:16:59.905 { 00:16:59.905 "name": "pt1", 00:16:59.905 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.905 "is_configured": true, 00:16:59.905 "data_offset": 256, 00:16:59.905 "data_size": 7936 00:16:59.905 }, 00:16:59.905 { 00:16:59.905 "name": "pt2", 00:16:59.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.905 "is_configured": true, 00:16:59.905 "data_offset": 256, 00:16:59.905 "data_size": 7936 00:16:59.905 } 00:16:59.905 ] 00:16:59.905 } 00:16:59.905 } 00:16:59.905 }' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:59.905 pt2' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 [2024-12-07 02:50:10.897099] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b4d5e688-c65a-4644-b397-42e9fba90217 '!=' b4d5e688-c65a-4644-b397-42e9fba90217 ']' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 [2024-12-07 02:50:10.936826] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:59.905 "name": "raid_bdev1", 00:16:59.905 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:16:59.905 "strip_size_kb": 0, 00:16:59.905 "state": "online", 00:16:59.905 "raid_level": "raid1", 00:16:59.905 "superblock": true, 00:16:59.905 "num_base_bdevs": 2, 00:16:59.905 "num_base_bdevs_discovered": 1, 00:16:59.905 "num_base_bdevs_operational": 1, 00:16:59.905 "base_bdevs_list": [ 00:16:59.905 { 00:16:59.905 "name": null, 00:16:59.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.905 "is_configured": false, 00:16:59.905 "data_offset": 0, 00:16:59.905 "data_size": 7936 00:16:59.905 }, 00:16:59.905 { 00:16:59.905 "name": "pt2", 00:16:59.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.905 "is_configured": true, 00:16:59.905 "data_offset": 256, 00:16:59.905 "data_size": 7936 00:16:59.905 } 00:16:59.905 ] 00:16:59.905 }' 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.905 02:50:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.531 [2024-12-07 02:50:11.344144] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.531 [2024-12-07 02:50:11.344169] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.531 [2024-12-07 02:50:11.344225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.531 [2024-12-07 02:50:11.344265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:00.531 [2024-12-07 02:50:11.344273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006980 name raid_bdev1, state offline 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.531 [2024-12-07 02:50:11.412131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:00.531 [2024-12-07 02:50:11.412215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.531 [2024-12-07 02:50:11.412248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:00.531 [2024-12-07 02:50:11.412280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.531 [2024-12-07 02:50:11.414163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.531 [2024-12-07 02:50:11.414197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:00.531 [2024-12-07 02:50:11.414242] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:00.531 [2024-12-07 02:50:11.414268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:00.531 [2024-12-07 02:50:11.414319] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006d00 00:17:00.531 [2024-12-07 02:50:11.414326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:00.531 [2024-12-07 02:50:11.414404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:00.531 [2024-12-07 02:50:11.414458] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006d00 00:17:00.531 [2024-12-07 02:50:11.414466] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006d00 00:17:00.531 [2024-12-07 02:50:11.414513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.531 pt2 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.531 02:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.531 "name": "raid_bdev1", 00:17:00.531 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217", 00:17:00.531 "strip_size_kb": 0, 00:17:00.531 "state": "online", 00:17:00.531 "raid_level": "raid1", 00:17:00.531 "superblock": true, 00:17:00.531 "num_base_bdevs": 2, 00:17:00.531 "num_base_bdevs_discovered": 1, 00:17:00.531 "num_base_bdevs_operational": 1, 00:17:00.531 "base_bdevs_list": [ 00:17:00.531 { 00:17:00.531 "name": null, 00:17:00.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.531 "is_configured": false, 00:17:00.531 "data_offset": 256, 00:17:00.531 "data_size": 7936 00:17:00.531 }, 00:17:00.531 { 00:17:00.531 "name": "pt2", 00:17:00.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.531 "is_configured": true, 00:17:00.531 "data_offset": 256, 00:17:00.531 "data_size": 7936 00:17:00.531 } 00:17:00.531 ] 00:17:00.531 }' 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.531 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.790 02:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.790 [2024-12-07 02:50:11.811693] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.790 [2024-12-07 02:50:11.811751] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.790 [2024-12-07 02:50:11.811801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.790 [2024-12-07 02:50:11.811835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.790 [2024-12-07 02:50:11.811845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006d00 name raid_bdev1, state offline 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:00.790 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.050 [2024-12-07 02:50:11.871666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:01.050 [2024-12-07 02:50:11.871747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.050 [2024-12-07 02:50:11.871780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:01.050 [2024-12-07 02:50:11.871812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.050 [2024-12-07 02:50:11.873638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.050 [2024-12-07 02:50:11.873707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:01.050 [2024-12-07 02:50:11.873764] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:01.050 [2024-12-07 02:50:11.873814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:01.050 [2024-12-07 02:50:11.873907] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:01.050 [2024-12-07 02:50:11.873946] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.050 [2024-12-07 02:50:11.873982] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007080 name raid_bdev1, state configuring 00:17:01.050 [2024-12-07 02:50:11.874062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.050 [2024-12-07 02:50:11.874146] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007400 00:17:01.050 [2024-12-07 02:50:11.874185] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:01.050 [2024-12-07 02:50:11.874259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:01.050 [2024-12-07 02:50:11.874339] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:01.050 [2024-12-07 02:50:11.874375] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:01.050 [2024-12-07 02:50:11.874468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.050 pt1 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.050 02:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:01.050 "name": "raid_bdev1",
00:17:01.050 "uuid": "b4d5e688-c65a-4644-b397-42e9fba90217",
00:17:01.050 "strip_size_kb": 0,
00:17:01.050 "state": "online",
00:17:01.050 "raid_level": "raid1",
00:17:01.050 "superblock": true,
00:17:01.050 "num_base_bdevs": 2,
00:17:01.050 "num_base_bdevs_discovered": 1,
00:17:01.050 "num_base_bdevs_operational": 1,
00:17:01.050 "base_bdevs_list": [
00:17:01.050 {
00:17:01.050 "name": null,
00:17:01.050 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:01.050 "is_configured": false,
00:17:01.050 "data_offset": 256,
00:17:01.050 "data_size": 7936
00:17:01.050 },
00:17:01.050 {
00:17:01.050 "name": "pt2",
00:17:01.050 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:01.050 "is_configured": true,
00:17:01.050 "data_offset": 256,
00:17:01.050 "data_size": 7936
00:17:01.050 }
00:17:01.050 ]
00:17:01.050 }'
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:01.050 02:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:01.309 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:17:01.309 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:17:01.309 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.309 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:01.309 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
00:17:01.569 [2024-12-07 02:50:12.394995] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b4d5e688-c65a-4644-b397-42e9fba90217 '!=' b4d5e688-c65a-4644-b397-42e9fba90217 ']'
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 99271
00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99271 ']'
00:17:01.569 02:50:12
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99271 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99271 00:17:01.569 killing process with pid 99271 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99271' 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 99271 00:17:01.569 [2024-12-07 02:50:12.477948] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.569 [2024-12-07 02:50:12.478014] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.569 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 99271 00:17:01.569 [2024-12-07 02:50:12.478058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.569 [2024-12-07 02:50:12.478066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:01.569 [2024-12-07 02:50:12.501902] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.829 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:01.829 00:17:01.829 real 0m4.851s 00:17:01.829 user 0m7.859s 00:17:01.829 sys 0m1.082s 00:17:01.829 
02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.829 ************************************ 00:17:01.829 END TEST raid_superblock_test_md_interleaved 00:17:01.829 ************************************ 00:17:01.829 02:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:01.829 02:50:12 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:01.829 02:50:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:01.829 02:50:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.829 02:50:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.829 ************************************ 00:17:01.829 START TEST raid_rebuild_test_sb_md_interleaved 00:17:01.829 ************************************ 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:01.829 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=99594
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 99594
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 99594 ']'
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:01.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:01.830 02:50:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:02.089 [2024-12-07 02:50:12.925996] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization...
00:17:02.089 [2024-12-07 02:50:12.926235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99594 ]
I/O size of 3145728 is greater than zero copy threshold (65536).
00:17:02.089 Zero copy mechanism will not be used.
00:17:02.089 [2024-12-07 02:50:13.092639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:02.089 [2024-12-07 02:50:13.138942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:17:02.349 [2024-12-07 02:50:13.181340] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-12-07 02:50:13.181453] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:02.920 BaseBdev1_malloc
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:02.920 [2024-12-07 02:50:13.747744] vbdev_passthru.c:
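The "Zero copy mechanism will not be used" notice above follows directly from the bdevperf invocation in the log: `-o 3M` requests 3 MiB I/Os, which exceeds the 64 KiB threshold quoted in the notice. A minimal sketch of that comparison, using only the two values printed in the log (the threshold is taken from the notice text, not read from SPDK source):

```python
# Recreate the size check behind the "zero copy" notice in the log above.
io_size = 3 * 1024 * 1024    # bdevperf -o 3M -> 3145728 bytes per I/O
zero_copy_threshold = 65536  # threshold value quoted in the notice

assert io_size == 3145728    # matches the number printed in the log
use_zero_copy = io_size <= zero_copy_threshold
print(use_zero_copy)  # False -> "Zero copy mechanism will not be used."
```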
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:02.920 [2024-12-07 02:50:13.747845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.920 [2024-12-07 02:50:13.747887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:02.920 [2024-12-07 02:50:13.747897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.920 [2024-12-07 02:50:13.749766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.920 [2024-12-07 02:50:13.749842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:02.920 BaseBdev1 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.920 BaseBdev2_malloc 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.920 [2024-12-07 02:50:13.777482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:17:02.920 [2024-12-07 02:50:13.777601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.920 [2024-12-07 02:50:13.777649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:02.920 [2024-12-07 02:50:13.777686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.920 [2024-12-07 02:50:13.779932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.920 [2024-12-07 02:50:13.780012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:02.920 BaseBdev2 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.920 spare_malloc 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.920 spare_delay 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.920 [2024-12-07 02:50:13.806069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:02.920 [2024-12-07 02:50:13.806118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.920 [2024-12-07 02:50:13.806138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:02.920 [2024-12-07 02:50:13.806147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.920 [2024-12-07 02:50:13.807941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.920 [2024-12-07 02:50:13.807975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:02.920 spare 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.920 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.920 [2024-12-07 02:50:13.814079] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.920 [2024-12-07 02:50:13.815816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.920 [2024-12-07 02:50:13.816002] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006280 00:17:02.920 [2024-12-07 02:50:13.816040] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:02.920 [2024-12-07 02:50:13.816151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:02.920 [2024-12-07 02:50:13.816285] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006280 00:17:02.921 [2024-12-07 02:50:13.816330] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000006280 00:17:02.921 [2024-12-07 02:50:13.816438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.921 02:50:13 
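The "blockcnt 7936, blocklen 4128" debug line above is consistent with the rpc arguments visible earlier in the log: `bdev_malloc_create 32 4096 -m 32 -i` creates a 32 MiB bdev with 4096-byte blocks and 32 bytes of interleaved metadata per block, and the JSON dumps show a 256-block data_offset for the superblock. A small arithmetic sketch of that relationship (the interpretation of data_offset as superblock reservation is an assumption from the log, not taken from SPDK source):

```python
# Derive the raid bdev geometry from the rpc args shown in the log.
malloc_size = 32 * 1024 * 1024  # bdev_malloc_create first arg: 32 MiB
block_size = 4096               # data bytes per block (second arg)
md_size = 32                    # interleaved metadata bytes per block (-m 32 -i)

blocklen = block_size + md_size        # interleaved md extends each block
num_blocks = malloc_size // block_size # blocks in each base malloc bdev
data_offset = 256                      # blocks before data (from the JSON dumps);
                                       # assumed to be the superblock reservation
raid_blockcnt = num_blocks - data_offset

assert blocklen == 4128       # matches "blocklen 4128" in the log
assert raid_blockcnt == 7936  # matches "blockcnt 7936" and "data_size": 7936
```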
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.921 "name": "raid_bdev1", 00:17:02.921 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:02.921 "strip_size_kb": 0, 00:17:02.921 "state": "online", 00:17:02.921 "raid_level": "raid1", 00:17:02.921 "superblock": true, 00:17:02.921 "num_base_bdevs": 2, 00:17:02.921 "num_base_bdevs_discovered": 2, 00:17:02.921 "num_base_bdevs_operational": 2, 00:17:02.921 "base_bdevs_list": [ 00:17:02.921 { 00:17:02.921 "name": "BaseBdev1", 00:17:02.921 "uuid": "91e1d1e9-e50d-55b7-ad21-a2890fc1aea1", 00:17:02.921 "is_configured": true, 00:17:02.921 "data_offset": 256, 00:17:02.921 "data_size": 7936 00:17:02.921 }, 00:17:02.921 { 00:17:02.921 "name": "BaseBdev2", 00:17:02.921 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:02.921 "is_configured": true, 00:17:02.921 "data_offset": 256, 00:17:02.921 "data_size": 7936 00:17:02.921 } 00:17:02.921 ] 00:17:02.921 }' 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.921 02:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:03.491 02:50:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.491 [2024-12-07 02:50:14.277550] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.491 02:50:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.491 [2024-12-07 02:50:14.357136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.491 02:50:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.491 "name": "raid_bdev1", 00:17:03.491 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:03.491 "strip_size_kb": 0, 00:17:03.491 "state": "online", 00:17:03.491 "raid_level": "raid1", 00:17:03.491 "superblock": true, 00:17:03.491 "num_base_bdevs": 2, 00:17:03.491 "num_base_bdevs_discovered": 1, 00:17:03.491 "num_base_bdevs_operational": 1, 00:17:03.491 "base_bdevs_list": [ 00:17:03.491 { 00:17:03.491 "name": null, 00:17:03.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.491 "is_configured": false, 00:17:03.491 "data_offset": 0, 00:17:03.491 "data_size": 7936 00:17:03.491 }, 00:17:03.491 { 00:17:03.491 "name": "BaseBdev2", 00:17:03.491 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:03.491 "is_configured": true, 00:17:03.491 "data_offset": 256, 00:17:03.491 "data_size": 7936 00:17:03.491 } 00:17:03.491 ] 00:17:03.491 }' 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.491 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.751 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:03.751 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.751 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:03.751 [2024-12-07 02:50:14.812399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.751 [2024-12-07 02:50:14.815360] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:03.751 [2024-12-07 02:50:14.817231] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:03.751 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.751 02:50:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.133 "name": "raid_bdev1", 00:17:05.133 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:05.133 "strip_size_kb": 0, 00:17:05.133 "state": "online", 00:17:05.133 "raid_level": "raid1", 00:17:05.133 
"superblock": true, 00:17:05.133 "num_base_bdevs": 2, 00:17:05.133 "num_base_bdevs_discovered": 2, 00:17:05.133 "num_base_bdevs_operational": 2, 00:17:05.133 "process": { 00:17:05.133 "type": "rebuild", 00:17:05.133 "target": "spare", 00:17:05.133 "progress": { 00:17:05.133 "blocks": 2560, 00:17:05.133 "percent": 32 00:17:05.133 } 00:17:05.133 }, 00:17:05.133 "base_bdevs_list": [ 00:17:05.133 { 00:17:05.133 "name": "spare", 00:17:05.133 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:05.133 "is_configured": true, 00:17:05.133 "data_offset": 256, 00:17:05.133 "data_size": 7936 00:17:05.133 }, 00:17:05.133 { 00:17:05.133 "name": "BaseBdev2", 00:17:05.133 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:05.133 "is_configured": true, 00:17:05.133 "data_offset": 256, 00:17:05.133 "data_size": 7936 00:17:05.133 } 00:17:05.133 ] 00:17:05.133 }' 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.133 02:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.133 [2024-12-07 02:50:15.928146] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.133 [2024-12-07 02:50:16.021991] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:17:05.133 [2024-12-07 02:50:16.022096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.133 [2024-12-07 02:50:16.022131] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:05.133 [2024-12-07 02:50:16.022151] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.133 "name": "raid_bdev1", 00:17:05.133 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:05.133 "strip_size_kb": 0, 00:17:05.133 "state": "online", 00:17:05.133 "raid_level": "raid1", 00:17:05.133 "superblock": true, 00:17:05.133 "num_base_bdevs": 2, 00:17:05.133 "num_base_bdevs_discovered": 1, 00:17:05.133 "num_base_bdevs_operational": 1, 00:17:05.133 "base_bdevs_list": [ 00:17:05.133 { 00:17:05.133 "name": null, 00:17:05.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.133 "is_configured": false, 00:17:05.133 "data_offset": 0, 00:17:05.133 "data_size": 7936 00:17:05.133 }, 00:17:05.133 { 00:17:05.133 "name": "BaseBdev2", 00:17:05.133 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:05.133 "is_configured": true, 00:17:05.133 "data_offset": 256, 00:17:05.133 "data_size": 7936 00:17:05.133 } 00:17:05.133 ] 00:17:05.133 }' 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.133 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.701 02:50:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.701 "name": "raid_bdev1", 00:17:05.701 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:05.701 "strip_size_kb": 0, 00:17:05.701 "state": "online", 00:17:05.701 "raid_level": "raid1", 00:17:05.701 "superblock": true, 00:17:05.701 "num_base_bdevs": 2, 00:17:05.701 "num_base_bdevs_discovered": 1, 00:17:05.701 "num_base_bdevs_operational": 1, 00:17:05.701 "base_bdevs_list": [ 00:17:05.701 { 00:17:05.701 "name": null, 00:17:05.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.701 "is_configured": false, 00:17:05.701 "data_offset": 0, 00:17:05.701 "data_size": 7936 00:17:05.701 }, 00:17:05.701 { 00:17:05.701 "name": "BaseBdev2", 00:17:05.701 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:05.701 "is_configured": true, 00:17:05.701 "data_offset": 256, 00:17:05.701 "data_size": 7936 00:17:05.701 } 00:17:05.701 ] 00:17:05.701 }' 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.701 02:50:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:05.701 [2024-12-07 02:50:16.652821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:05.701 [2024-12-07 02:50:16.655273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:05.701 [2024-12-07 02:50:16.657013] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.701 02:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.637 
02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.637 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.896 "name": "raid_bdev1", 00:17:06.896 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:06.896 "strip_size_kb": 0, 00:17:06.896 "state": "online", 00:17:06.896 "raid_level": "raid1", 00:17:06.896 "superblock": true, 00:17:06.896 "num_base_bdevs": 2, 00:17:06.896 "num_base_bdevs_discovered": 2, 00:17:06.896 "num_base_bdevs_operational": 2, 00:17:06.896 "process": { 00:17:06.896 "type": "rebuild", 00:17:06.896 "target": "spare", 00:17:06.896 "progress": { 00:17:06.896 "blocks": 2560, 00:17:06.896 "percent": 32 00:17:06.896 } 00:17:06.896 }, 00:17:06.896 "base_bdevs_list": [ 00:17:06.896 { 00:17:06.896 "name": "spare", 00:17:06.896 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:06.896 "is_configured": true, 00:17:06.896 "data_offset": 256, 00:17:06.896 "data_size": 7936 00:17:06.896 }, 00:17:06.896 { 00:17:06.896 "name": "BaseBdev2", 00:17:06.896 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:06.896 "is_configured": true, 00:17:06.896 "data_offset": 256, 00:17:06.896 "data_size": 7936 00:17:06.896 } 00:17:06.896 ] 00:17:06.896 }' 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:06.896 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=630 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.896 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.897 "name": "raid_bdev1", 00:17:06.897 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:06.897 "strip_size_kb": 0, 00:17:06.897 "state": "online", 00:17:06.897 "raid_level": "raid1", 00:17:06.897 "superblock": true, 00:17:06.897 "num_base_bdevs": 2, 00:17:06.897 "num_base_bdevs_discovered": 2, 00:17:06.897 "num_base_bdevs_operational": 2, 00:17:06.897 "process": { 00:17:06.897 "type": "rebuild", 00:17:06.897 "target": "spare", 00:17:06.897 "progress": { 00:17:06.897 "blocks": 2816, 00:17:06.897 "percent": 35 00:17:06.897 } 00:17:06.897 }, 00:17:06.897 "base_bdevs_list": [ 00:17:06.897 { 00:17:06.897 "name": "spare", 00:17:06.897 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:06.897 "is_configured": true, 00:17:06.897 "data_offset": 256, 00:17:06.897 "data_size": 7936 00:17:06.897 }, 00:17:06.897 { 00:17:06.897 "name": "BaseBdev2", 00:17:06.897 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:06.897 "is_configured": true, 00:17:06.897 "data_offset": 256, 00:17:06.897 "data_size": 7936 00:17:06.897 } 00:17:06.897 ] 00:17:06.897 }' 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:06.897 02:50:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:06.897 02:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:07.835 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:07.835 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.835 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.835 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.835 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.835 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.094 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.094 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.095 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:08.095 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.095 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.095 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.095 "name": "raid_bdev1", 00:17:08.095 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:08.095 "strip_size_kb": 0, 00:17:08.095 "state": 
"online", 00:17:08.095 "raid_level": "raid1", 00:17:08.095 "superblock": true, 00:17:08.095 "num_base_bdevs": 2, 00:17:08.095 "num_base_bdevs_discovered": 2, 00:17:08.095 "num_base_bdevs_operational": 2, 00:17:08.095 "process": { 00:17:08.095 "type": "rebuild", 00:17:08.095 "target": "spare", 00:17:08.095 "progress": { 00:17:08.095 "blocks": 5632, 00:17:08.095 "percent": 70 00:17:08.095 } 00:17:08.095 }, 00:17:08.095 "base_bdevs_list": [ 00:17:08.095 { 00:17:08.095 "name": "spare", 00:17:08.095 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:08.095 "is_configured": true, 00:17:08.095 "data_offset": 256, 00:17:08.095 "data_size": 7936 00:17:08.095 }, 00:17:08.095 { 00:17:08.095 "name": "BaseBdev2", 00:17:08.095 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:08.095 "is_configured": true, 00:17:08.095 "data_offset": 256, 00:17:08.095 "data_size": 7936 00:17:08.095 } 00:17:08.095 ] 00:17:08.095 }' 00:17:08.095 02:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.095 02:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.095 02:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.095 02:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.095 02:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.033 [2024-12-07 02:50:19.767531] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:09.033 [2024-12-07 02:50:19.767614] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:09.033 [2024-12-07 02:50:19.767703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.033 "name": "raid_bdev1", 00:17:09.033 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:09.033 "strip_size_kb": 0, 00:17:09.033 "state": "online", 00:17:09.033 "raid_level": "raid1", 00:17:09.033 "superblock": true, 00:17:09.033 "num_base_bdevs": 2, 00:17:09.033 "num_base_bdevs_discovered": 2, 00:17:09.033 "num_base_bdevs_operational": 2, 00:17:09.033 "base_bdevs_list": [ 00:17:09.033 { 00:17:09.033 "name": "spare", 00:17:09.033 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:09.033 "is_configured": true, 00:17:09.033 "data_offset": 256, 
00:17:09.033 "data_size": 7936 00:17:09.033 }, 00:17:09.033 { 00:17:09.033 "name": "BaseBdev2", 00:17:09.033 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:09.033 "is_configured": true, 00:17:09.033 "data_offset": 256, 00:17:09.033 "data_size": 7936 00:17:09.033 } 00:17:09.033 ] 00:17:09.033 }' 00:17:09.033 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 02:50:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.293 "name": "raid_bdev1", 00:17:09.293 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:09.293 "strip_size_kb": 0, 00:17:09.293 "state": "online", 00:17:09.293 "raid_level": "raid1", 00:17:09.293 "superblock": true, 00:17:09.293 "num_base_bdevs": 2, 00:17:09.293 "num_base_bdevs_discovered": 2, 00:17:09.293 "num_base_bdevs_operational": 2, 00:17:09.293 "base_bdevs_list": [ 00:17:09.293 { 00:17:09.293 "name": "spare", 00:17:09.293 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 256, 00:17:09.293 "data_size": 7936 00:17:09.293 }, 00:17:09.293 { 00:17:09.293 "name": "BaseBdev2", 00:17:09.293 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 256, 00:17:09.293 "data_size": 7936 00:17:09.293 } 00:17:09.293 ] 00:17:09.293 }' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.293 02:50:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.293 "name": "raid_bdev1", 00:17:09.293 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:09.293 "strip_size_kb": 0, 00:17:09.293 "state": "online", 00:17:09.293 "raid_level": "raid1", 00:17:09.293 "superblock": true, 00:17:09.293 "num_base_bdevs": 2, 00:17:09.293 "num_base_bdevs_discovered": 2, 
00:17:09.293 "num_base_bdevs_operational": 2, 00:17:09.293 "base_bdevs_list": [ 00:17:09.293 { 00:17:09.293 "name": "spare", 00:17:09.293 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 256, 00:17:09.293 "data_size": 7936 00:17:09.293 }, 00:17:09.293 { 00:17:09.293 "name": "BaseBdev2", 00:17:09.293 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 256, 00:17:09.293 "data_size": 7936 00:17:09.293 } 00:17:09.293 ] 00:17:09.293 }' 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.293 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 [2024-12-07 02:50:20.661322] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.863 [2024-12-07 02:50:20.661351] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.863 [2024-12-07 02:50:20.661423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.863 [2024-12-07 02:50:20.661493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.863 [2024-12-07 02:50:20.661508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000006280 name raid_bdev1, state offline 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.863 02:50:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 [2024-12-07 02:50:20.725212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:09.863 [2024-12-07 02:50:20.725265] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:09.863 [2024-12-07 02:50:20.725283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:09.863 [2024-12-07 02:50:20.725294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.863 [2024-12-07 02:50:20.727232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.863 [2024-12-07 02:50:20.727271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:09.863 [2024-12-07 02:50:20.727321] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:09.863 [2024-12-07 02:50:20.727372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.863 [2024-12-07 02:50:20.727474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.863 spare 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 [2024-12-07 02:50:20.827362] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000006600 00:17:09.863 [2024-12-07 02:50:20.827389] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:09.863 [2024-12-07 02:50:20.827476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:09.863 [2024-12-07 02:50:20.827553] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000006600 00:17:09.863 [2024-12-07 02:50:20.827569] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000006600 00:17:09.863 [2024-12-07 02:50:20.827641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.863 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.863 "name": "raid_bdev1", 00:17:09.863 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:09.863 "strip_size_kb": 0, 00:17:09.863 "state": "online", 00:17:09.863 "raid_level": "raid1", 00:17:09.863 "superblock": true, 00:17:09.863 "num_base_bdevs": 2, 00:17:09.863 "num_base_bdevs_discovered": 2, 00:17:09.864 "num_base_bdevs_operational": 2, 00:17:09.864 "base_bdevs_list": [ 00:17:09.864 { 00:17:09.864 "name": "spare", 00:17:09.864 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:09.864 "is_configured": true, 00:17:09.864 "data_offset": 256, 00:17:09.864 "data_size": 7936 00:17:09.864 }, 00:17:09.864 { 00:17:09.864 "name": "BaseBdev2", 00:17:09.864 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:09.864 "is_configured": true, 00:17:09.864 "data_offset": 256, 00:17:09.864 "data_size": 7936 00:17:09.864 } 00:17:09.864 ] 00:17:09.864 }' 00:17:09.864 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.864 02:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.433 "name": "raid_bdev1", 00:17:10.433 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:10.433 "strip_size_kb": 0, 00:17:10.433 "state": "online", 00:17:10.433 "raid_level": "raid1", 00:17:10.433 "superblock": true, 00:17:10.433 "num_base_bdevs": 2, 00:17:10.433 "num_base_bdevs_discovered": 2, 00:17:10.433 "num_base_bdevs_operational": 2, 00:17:10.433 "base_bdevs_list": [ 00:17:10.433 { 00:17:10.433 "name": "spare", 00:17:10.433 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:10.433 "is_configured": true, 00:17:10.433 "data_offset": 256, 00:17:10.433 "data_size": 7936 00:17:10.433 }, 00:17:10.433 { 00:17:10.433 "name": "BaseBdev2", 00:17:10.433 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:10.433 "is_configured": true, 00:17:10.433 "data_offset": 256, 00:17:10.433 "data_size": 7936 00:17:10.433 } 00:17:10.433 ] 00:17:10.433 }' 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.433 [2024-12-07 02:50:21.492003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.433 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.692 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.692 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.692 "name": "raid_bdev1", 00:17:10.692 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:10.692 "strip_size_kb": 0, 00:17:10.692 "state": "online", 00:17:10.692 "raid_level": "raid1", 00:17:10.692 "superblock": true, 00:17:10.692 "num_base_bdevs": 2, 00:17:10.692 "num_base_bdevs_discovered": 1, 00:17:10.692 "num_base_bdevs_operational": 1, 00:17:10.692 "base_bdevs_list": [ 00:17:10.692 { 00:17:10.692 "name": null, 00:17:10.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.692 
"is_configured": false, 00:17:10.692 "data_offset": 0, 00:17:10.692 "data_size": 7936 00:17:10.692 }, 00:17:10.692 { 00:17:10.692 "name": "BaseBdev2", 00:17:10.692 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:10.692 "is_configured": true, 00:17:10.692 "data_offset": 256, 00:17:10.692 "data_size": 7936 00:17:10.692 } 00:17:10.692 ] 00:17:10.692 }' 00:17:10.692 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.692 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:10.952 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.952 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:10.952 [2024-12-07 02:50:21.963200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.952 [2024-12-07 02:50:21.963357] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:10.952 [2024-12-07 02:50:21.963376] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:10.952 [2024-12-07 02:50:21.963418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:10.952 [2024-12-07 02:50:21.966202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:10.952 [2024-12-07 02:50:21.967969] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.952 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.952 02:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.334 02:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:12.334 "name": "raid_bdev1", 00:17:12.334 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:12.334 "strip_size_kb": 0, 00:17:12.334 "state": "online", 00:17:12.334 "raid_level": "raid1", 00:17:12.334 "superblock": true, 00:17:12.334 "num_base_bdevs": 2, 00:17:12.334 "num_base_bdevs_discovered": 2, 00:17:12.334 "num_base_bdevs_operational": 2, 00:17:12.334 "process": { 00:17:12.334 "type": "rebuild", 00:17:12.334 "target": "spare", 00:17:12.334 "progress": { 00:17:12.334 "blocks": 2560, 00:17:12.334 "percent": 32 00:17:12.334 } 00:17:12.334 }, 00:17:12.334 "base_bdevs_list": [ 00:17:12.334 { 00:17:12.334 "name": "spare", 00:17:12.334 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:12.334 "is_configured": true, 00:17:12.334 "data_offset": 256, 00:17:12.334 "data_size": 7936 00:17:12.334 }, 00:17:12.334 { 00:17:12.334 "name": "BaseBdev2", 00:17:12.334 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:12.334 "is_configured": true, 00:17:12.334 "data_offset": 256, 00:17:12.334 "data_size": 7936 00:17:12.334 } 00:17:12.334 ] 00:17:12.334 }' 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.334 [2024-12-07 02:50:23.102671] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.334 [2024-12-07 02:50:23.171887] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:12.334 [2024-12-07 02:50:23.171942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.334 [2024-12-07 02:50:23.171958] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.334 [2024-12-07 02:50:23.171965] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.334 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.335 02:50:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.335 "name": "raid_bdev1", 00:17:12.335 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:12.335 "strip_size_kb": 0, 00:17:12.335 "state": "online", 00:17:12.335 "raid_level": "raid1", 00:17:12.335 "superblock": true, 00:17:12.335 "num_base_bdevs": 2, 00:17:12.335 "num_base_bdevs_discovered": 1, 00:17:12.335 "num_base_bdevs_operational": 1, 00:17:12.335 "base_bdevs_list": [ 00:17:12.335 { 00:17:12.335 "name": null, 00:17:12.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.335 "is_configured": false, 00:17:12.335 "data_offset": 0, 00:17:12.335 "data_size": 7936 00:17:12.335 }, 00:17:12.335 { 00:17:12.335 "name": "BaseBdev2", 00:17:12.335 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:12.335 "is_configured": true, 00:17:12.335 "data_offset": 256, 00:17:12.335 "data_size": 7936 00:17:12.335 } 00:17:12.335 ] 00:17:12.335 }' 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.335 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.594 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:12.595 02:50:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.595 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:12.595 [2024-12-07 02:50:23.614508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:12.595 [2024-12-07 02:50:23.614562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.595 [2024-12-07 02:50:23.614597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:12.595 [2024-12-07 02:50:23.614607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.595 [2024-12-07 02:50:23.614789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.595 [2024-12-07 02:50:23.614810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:12.595 [2024-12-07 02:50:23.614860] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:12.595 [2024-12-07 02:50:23.614875] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:12.595 [2024-12-07 02:50:23.614885] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
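After the delayed `spare` passthru bdev is re-added, the test waits one second and then checks the raid bdev's `process` object to confirm a rebuild is running against the expected target (`verify_raid_bdev_process raid_bdev1 rebuild spare`). A hedged sketch of that process check, again assuming `jq` and a static JSON copy of the `raid_bdev_info` captured in the log rather than a live RPC round-trip:

```shell
#!/usr/bin/env bash
# Static stand-in for the raid_bdev_info JSON logged during the rebuild.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": { "blocks": 2560, "percent": 32 }
  }
}'

# The `// "none"` alternative matches the script: when no background process
# is running, the "process" key is absent and both checks collapse to "none".
process_type=$(jq -r '.process.type // "none"' <<< "$raid_bdev_info")
process_target=$(jq -r '.process.target // "none"' <<< "$raid_bdev_info")

[[ "$process_type" == "rebuild" ]] || { echo "no rebuild in progress" >&2; exit 1; }
[[ "$process_target" == "spare" ]] || { echo "wrong target: $process_target" >&2; exit 1; }
echo "rebuild targeting spare"
```

This is why the log alternates between `verify_raid_bdev_process raid_bdev1 rebuild spare` (while the rebuild runs) and `verify_raid_bdev_process raid_bdev1 none none` (after the target bdev is deleted and the process is torn down).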
00:17:12.595 [2024-12-07 02:50:23.614906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:12.595 [2024-12-07 02:50:23.617240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:12.595 [2024-12-07 02:50:23.619013] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.595 spare 00:17:12.595 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.595 02:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:13.977 "name": "raid_bdev1", 00:17:13.977 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:13.977 "strip_size_kb": 0, 00:17:13.977 "state": "online", 00:17:13.977 "raid_level": "raid1", 00:17:13.977 "superblock": true, 00:17:13.977 "num_base_bdevs": 2, 00:17:13.977 "num_base_bdevs_discovered": 2, 00:17:13.977 "num_base_bdevs_operational": 2, 00:17:13.977 "process": { 00:17:13.977 "type": "rebuild", 00:17:13.977 "target": "spare", 00:17:13.977 "progress": { 00:17:13.977 "blocks": 2560, 00:17:13.977 "percent": 32 00:17:13.977 } 00:17:13.977 }, 00:17:13.977 "base_bdevs_list": [ 00:17:13.977 { 00:17:13.977 "name": "spare", 00:17:13.977 "uuid": "8a7c5d7e-363d-5934-8d2e-ad824b9ce7df", 00:17:13.977 "is_configured": true, 00:17:13.977 "data_offset": 256, 00:17:13.977 "data_size": 7936 00:17:13.977 }, 00:17:13.977 { 00:17:13.977 "name": "BaseBdev2", 00:17:13.977 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:13.977 "is_configured": true, 00:17:13.977 "data_offset": 256, 00:17:13.977 "data_size": 7936 00:17:13.977 } 00:17:13.977 ] 00:17:13.977 }' 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.977 [2024-12-07 
02:50:24.781706] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.977 [2024-12-07 02:50:24.822924] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:13.977 [2024-12-07 02:50:24.823001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.977 [2024-12-07 02:50:24.823017] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:13.977 [2024-12-07 02:50:24.823025] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.977 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.978 02:50:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.978 "name": "raid_bdev1", 00:17:13.978 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:13.978 "strip_size_kb": 0, 00:17:13.978 "state": "online", 00:17:13.978 "raid_level": "raid1", 00:17:13.978 "superblock": true, 00:17:13.978 "num_base_bdevs": 2, 00:17:13.978 "num_base_bdevs_discovered": 1, 00:17:13.978 "num_base_bdevs_operational": 1, 00:17:13.978 "base_bdevs_list": [ 00:17:13.978 { 00:17:13.978 "name": null, 00:17:13.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.978 "is_configured": false, 00:17:13.978 "data_offset": 0, 00:17:13.978 "data_size": 7936 00:17:13.978 }, 00:17:13.978 { 00:17:13.978 "name": "BaseBdev2", 00:17:13.978 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:13.978 "is_configured": true, 00:17:13.978 "data_offset": 256, 00:17:13.978 "data_size": 7936 00:17:13.978 } 00:17:13.978 ] 00:17:13.978 }' 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.978 02:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.236 02:50:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.236 "name": "raid_bdev1", 00:17:14.236 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:14.236 "strip_size_kb": 0, 00:17:14.236 "state": "online", 00:17:14.236 "raid_level": "raid1", 00:17:14.236 "superblock": true, 00:17:14.236 "num_base_bdevs": 2, 00:17:14.236 "num_base_bdevs_discovered": 1, 00:17:14.236 "num_base_bdevs_operational": 1, 00:17:14.236 "base_bdevs_list": [ 00:17:14.236 { 00:17:14.236 "name": null, 00:17:14.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.236 "is_configured": false, 00:17:14.236 "data_offset": 0, 00:17:14.236 "data_size": 7936 00:17:14.236 }, 00:17:14.236 { 00:17:14.236 "name": "BaseBdev2", 00:17:14.236 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:14.236 "is_configured": true, 00:17:14.236 "data_offset": 256, 
00:17:14.236 "data_size": 7936 00:17:14.236 } 00:17:14.236 ] 00:17:14.236 }' 00:17:14.236 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:14.495 [2024-12-07 02:50:25.409541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:14.495 [2024-12-07 02:50:25.409605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.495 [2024-12-07 02:50:25.409622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:14.495 [2024-12-07 02:50:25.409632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.495 [2024-12-07 02:50:25.409779] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.495 [2024-12-07 02:50:25.409801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:14.495 [2024-12-07 02:50:25.409844] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:14.495 [2024-12-07 02:50:25.409872] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.495 [2024-12-07 02:50:25.409885] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:14.495 [2024-12-07 02:50:25.409900] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:14.495 BaseBdev1 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.495 02:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.432 02:50:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.432 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.432 "name": "raid_bdev1", 00:17:15.432 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:15.432 "strip_size_kb": 0, 00:17:15.432 "state": "online", 00:17:15.432 "raid_level": "raid1", 00:17:15.432 "superblock": true, 00:17:15.432 "num_base_bdevs": 2, 00:17:15.432 "num_base_bdevs_discovered": 1, 00:17:15.432 "num_base_bdevs_operational": 1, 00:17:15.432 "base_bdevs_list": [ 00:17:15.433 { 00:17:15.433 "name": null, 00:17:15.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.433 "is_configured": false, 00:17:15.433 "data_offset": 0, 00:17:15.433 "data_size": 7936 00:17:15.433 }, 00:17:15.433 { 00:17:15.433 "name": "BaseBdev2", 00:17:15.433 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:15.433 "is_configured": true, 00:17:15.433 "data_offset": 256, 00:17:15.433 "data_size": 7936 00:17:15.433 } 00:17:15.433 ] 00:17:15.433 }' 00:17:15.433 02:50:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.433 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.002 "name": "raid_bdev1", 00:17:16.002 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:16.002 "strip_size_kb": 0, 00:17:16.002 "state": "online", 00:17:16.002 "raid_level": "raid1", 00:17:16.002 "superblock": true, 00:17:16.002 "num_base_bdevs": 2, 00:17:16.002 "num_base_bdevs_discovered": 1, 00:17:16.002 "num_base_bdevs_operational": 1, 00:17:16.002 "base_bdevs_list": [ 00:17:16.002 { 00:17:16.002 "name": 
null, 00:17:16.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.002 "is_configured": false, 00:17:16.002 "data_offset": 0, 00:17:16.002 "data_size": 7936 00:17:16.002 }, 00:17:16.002 { 00:17:16.002 "name": "BaseBdev2", 00:17:16.002 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:16.002 "is_configured": true, 00:17:16.002 "data_offset": 256, 00:17:16.002 "data_size": 7936 00:17:16.002 } 00:17:16.002 ] 00:17:16.002 }' 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.002 02:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:16.002 [2024-12-07 02:50:27.018810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:16.002 [2024-12-07 02:50:27.018966] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.002 [2024-12-07 02:50:27.018979] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:16.002 request: 00:17:16.002 { 00:17:16.002 "base_bdev": "BaseBdev1", 00:17:16.002 "raid_bdev": "raid_bdev1", 00:17:16.002 "method": "bdev_raid_add_base_bdev", 00:17:16.002 "req_id": 1 00:17:16.002 } 00:17:16.002 Got JSON-RPC error response 00:17:16.002 response: 00:17:16.002 { 00:17:16.002 "code": -22, 00:17:16.002 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:16.002 } 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.002 02:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.386 "name": "raid_bdev1", 00:17:17.386 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:17.386 "strip_size_kb": 0, 
00:17:17.386 "state": "online", 00:17:17.386 "raid_level": "raid1", 00:17:17.386 "superblock": true, 00:17:17.386 "num_base_bdevs": 2, 00:17:17.386 "num_base_bdevs_discovered": 1, 00:17:17.386 "num_base_bdevs_operational": 1, 00:17:17.386 "base_bdevs_list": [ 00:17:17.386 { 00:17:17.386 "name": null, 00:17:17.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.386 "is_configured": false, 00:17:17.386 "data_offset": 0, 00:17:17.386 "data_size": 7936 00:17:17.386 }, 00:17:17.386 { 00:17:17.386 "name": "BaseBdev2", 00:17:17.386 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:17.386 "is_configured": true, 00:17:17.386 "data_offset": 256, 00:17:17.386 "data_size": 7936 00:17:17.386 } 00:17:17.386 ] 00:17:17.386 }' 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.386 
02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.386 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.657 "name": "raid_bdev1", 00:17:17.657 "uuid": "2638ba02-dbb7-4bad-b0bc-f7f6d2397f4d", 00:17:17.657 "strip_size_kb": 0, 00:17:17.657 "state": "online", 00:17:17.657 "raid_level": "raid1", 00:17:17.657 "superblock": true, 00:17:17.657 "num_base_bdevs": 2, 00:17:17.657 "num_base_bdevs_discovered": 1, 00:17:17.657 "num_base_bdevs_operational": 1, 00:17:17.657 "base_bdevs_list": [ 00:17:17.657 { 00:17:17.657 "name": null, 00:17:17.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.657 "is_configured": false, 00:17:17.657 "data_offset": 0, 00:17:17.657 "data_size": 7936 00:17:17.657 }, 00:17:17.657 { 00:17:17.657 "name": "BaseBdev2", 00:17:17.657 "uuid": "75dabdc6-ffb1-56ff-8a18-b77644f3773b", 00:17:17.657 "is_configured": true, 00:17:17.657 "data_offset": 256, 00:17:17.657 "data_size": 7936 00:17:17.657 } 00:17:17.657 ] 00:17:17.657 }' 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 99594 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 99594 ']' 00:17:17.657 02:50:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 99594 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99594 00:17:17.657 killing process with pid 99594 00:17:17.657 Received shutdown signal, test time was about 60.000000 seconds 00:17:17.657 00:17:17.657 Latency(us) 00:17:17.657 [2024-12-07T02:50:28.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.657 [2024-12-07T02:50:28.735Z] =================================================================================================================== 00:17:17.657 [2024-12-07T02:50:28.735Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99594' 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 99594 00:17:17.657 [2024-12-07 02:50:28.600397] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.657 [2024-12-07 02:50:28.600517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.657 [2024-12-07 02:50:28.600571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.657 [2024-12-07 02:50:28.600592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000006600 name raid_bdev1, state offline 00:17:17.657 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 99594 00:17:17.657 [2024-12-07 02:50:28.633695] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.917 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:17.917 00:17:17.917 real 0m16.039s 00:17:17.917 user 0m21.351s 00:17:17.917 sys 0m1.666s 00:17:17.917 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.917 02:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:17.918 ************************************ 00:17:17.918 END TEST raid_rebuild_test_sb_md_interleaved 00:17:17.918 ************************************ 00:17:17.918 02:50:28 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:17.918 02:50:28 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:17.918 02:50:28 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 99594 ']' 00:17:17.918 02:50:28 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 99594 00:17:17.918 02:50:28 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:17.918 ************************************ 00:17:17.918 END TEST bdev_raid 00:17:17.918 ************************************ 00:17:17.918 00:17:17.918 real 10m11.447s 00:17:17.918 user 14m15.157s 00:17:17.918 sys 1m58.192s 00:17:17.918 02:50:28 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.918 02:50:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.177 02:50:29 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:18.177 02:50:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:18.177 02:50:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.177 02:50:29 -- common/autotest_common.sh@10 -- # set +x 00:17:18.177 
************************************ 00:17:18.177 START TEST spdkcli_raid 00:17:18.177 ************************************ 00:17:18.177 02:50:29 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:18.177 * Looking for test storage... 00:17:18.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:18.177 02:50:29 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:18.177 02:50:29 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:17:18.177 02:50:29 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:18.437 02:50:29 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.437 02:50:29 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:18.437 02:50:29 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.437 02:50:29 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:18.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.437 --rc genhtml_branch_coverage=1 00:17:18.437 --rc genhtml_function_coverage=1 00:17:18.437 --rc genhtml_legend=1 00:17:18.437 --rc geninfo_all_blocks=1 00:17:18.437 --rc geninfo_unexecuted_blocks=1 00:17:18.437 00:17:18.437 ' 00:17:18.437 02:50:29 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:18.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.437 --rc genhtml_branch_coverage=1 00:17:18.437 --rc genhtml_function_coverage=1 00:17:18.437 --rc genhtml_legend=1 00:17:18.437 --rc geninfo_all_blocks=1 00:17:18.437 --rc geninfo_unexecuted_blocks=1 00:17:18.437 00:17:18.437 ' 00:17:18.437 
02:50:29 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:18.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.437 --rc genhtml_branch_coverage=1 00:17:18.437 --rc genhtml_function_coverage=1 00:17:18.437 --rc genhtml_legend=1 00:17:18.438 --rc geninfo_all_blocks=1 00:17:18.438 --rc geninfo_unexecuted_blocks=1 00:17:18.438 00:17:18.438 ' 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:18.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.438 --rc genhtml_branch_coverage=1 00:17:18.438 --rc genhtml_function_coverage=1 00:17:18.438 --rc genhtml_legend=1 00:17:18.438 --rc geninfo_all_blocks=1 00:17:18.438 --rc geninfo_unexecuted_blocks=1 00:17:18.438 00:17:18.438 ' 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:18.438 02:50:29 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=100258 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:18.438 02:50:29 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 100258 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 100258 ']' 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:18.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:18.438 02:50:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.438 [2024-12-07 02:50:29.410317] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:18.438 [2024-12-07 02:50:29.410923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100258 ] 00:17:18.698 [2024-12-07 02:50:29.580300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:18.698 [2024-12-07 02:50:29.628381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.698 [2024-12-07 02:50:29.628481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.267 02:50:30 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:19.267 02:50:30 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:17:19.267 02:50:30 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:19.267 02:50:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:19.267 02:50:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.267 02:50:30 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:19.267 02:50:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:19.267 02:50:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.267 02:50:30 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:19.267 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:19.267 ' 00:17:21.170 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:21.170 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:21.170 02:50:31 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:21.170 02:50:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:21.170 02:50:31 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:21.170 02:50:31 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:21.170 02:50:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:21.170 02:50:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.170 02:50:31 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:21.170 ' 00:17:22.107 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:22.107 02:50:33 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:22.107 02:50:33 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.107 02:50:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.107 02:50:33 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:22.107 02:50:33 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.107 02:50:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.107 02:50:33 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:22.107 02:50:33 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:22.675 02:50:33 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:22.675 02:50:33 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:22.675 02:50:33 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:22.675 02:50:33 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:22.675 02:50:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.675 02:50:33 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:22.675 02:50:33 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:22.675 02:50:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.675 02:50:33 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:22.675 ' 00:17:23.611 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:17:23.870 02:50:34 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:17:23.870 02:50:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:23.870 02:50:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.870 02:50:34 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:17:23.870 02:50:34 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:23.870 02:50:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.870 02:50:34 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:17:23.870 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:17:23.870 ' 00:17:25.289 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:17:25.289 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:17:25.289 02:50:36 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:17:25.289 02:50:36 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:25.289 02:50:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.289 02:50:36 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 100258 00:17:25.289 02:50:36 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100258 ']' 00:17:25.289 02:50:36 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100258 00:17:25.289 02:50:36 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:17:25.289 02:50:36 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.289 02:50:36 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100258 00:17:25.549 02:50:36 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:25.549 02:50:36 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:25.549 killing process with pid 100258 00:17:25.549 02:50:36 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100258' 00:17:25.549 02:50:36 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 100258 00:17:25.549 02:50:36 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 100258 00:17:25.809 02:50:36 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:17:25.809 02:50:36 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 100258 ']' 00:17:25.809 02:50:36 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 100258 00:17:25.809 02:50:36 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 100258 ']' 00:17:25.809 02:50:36 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 100258 00:17:25.809 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (100258) - No such process 00:17:25.809 Process with pid 100258 is not found 00:17:25.809 02:50:36 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 100258 is not found' 00:17:25.809 02:50:36 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:17:25.809 02:50:36 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:17:25.809 02:50:36 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:17:25.809 02:50:36 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:17:25.809 00:17:25.809 real 0m7.748s 00:17:25.809 user 0m16.278s 
00:17:25.809 sys 0m1.132s 00:17:25.809 02:50:36 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.809 02:50:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.809 ************************************ 00:17:25.809 END TEST spdkcli_raid 00:17:25.809 ************************************ 00:17:25.809 02:50:36 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:25.809 02:50:36 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.809 02:50:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.809 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:17:25.809 ************************************ 00:17:25.809 START TEST blockdev_raid5f 00:17:25.809 ************************************ 00:17:25.809 02:50:36 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:17:26.070 * Looking for test storage... 00:17:26.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:26.070 02:50:36 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:26.070 02:50:36 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:17:26.070 02:50:36 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:26.070 02:50:37 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.070 02:50:37 blockdev_raid5f -- 
scripts/common.sh@337 -- # read -ra ver2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.070 02:50:37 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:17:26.070 02:50:37 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.070 02:50:37 blockdev_raid5f -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:17:26.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.070 --rc genhtml_branch_coverage=1 00:17:26.070 --rc genhtml_function_coverage=1 00:17:26.070 --rc genhtml_legend=1 00:17:26.070 --rc geninfo_all_blocks=1 00:17:26.070 --rc geninfo_unexecuted_blocks=1 00:17:26.070 00:17:26.070 ' 00:17:26.070 02:50:37 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:26.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.070 --rc genhtml_branch_coverage=1 00:17:26.070 --rc genhtml_function_coverage=1 00:17:26.070 --rc genhtml_legend=1 00:17:26.070 --rc geninfo_all_blocks=1 00:17:26.070 --rc geninfo_unexecuted_blocks=1 00:17:26.070 00:17:26.070 ' 00:17:26.070 02:50:37 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:26.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.070 --rc genhtml_branch_coverage=1 00:17:26.070 --rc genhtml_function_coverage=1 00:17:26.070 --rc genhtml_legend=1 00:17:26.070 --rc geninfo_all_blocks=1 00:17:26.070 --rc geninfo_unexecuted_blocks=1 00:17:26.070 00:17:26.070 ' 00:17:26.070 02:50:37 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:26.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.070 --rc genhtml_branch_coverage=1 00:17:26.070 --rc genhtml_function_coverage=1 00:17:26.070 --rc genhtml_legend=1 00:17:26.070 --rc geninfo_all_blocks=1 00:17:26.070 --rc geninfo_unexecuted_blocks=1 00:17:26.070 00:17:26.070 ' 00:17:26.070 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:26.070 02:50:37 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:17:26.070 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:26.070 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=100518 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:26.071 02:50:37 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 100518 00:17:26.071 02:50:37 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 100518 ']' 00:17:26.071 02:50:37 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.071 02:50:37 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:26.071 02:50:37 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.071 02:50:37 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:26.071 02:50:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:26.331 [2024-12-07 02:50:37.221369] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:26.331 [2024-12-07 02:50:37.221607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100518 ] 00:17:26.331 [2024-12-07 02:50:37.388498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.590 [2024-12-07 02:50:37.435926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:17:27.159 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:27.159 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:17:27.159 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.159 Malloc0 00:17:27.159 Malloc1 00:17:27.159 Malloc2 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.159 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.159 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:17:27.159 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.159 
02:50:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.159 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.159 02:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 02:50:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.160 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:27.160 02:50:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.160 02:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 02:50:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.160 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:27.160 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:27.160 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:27.160 02:50:38 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.160 02:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.160 02:50:38 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.160 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:27.160 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:27.160 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aa4dd46a-f72c-4d8c-ab7c-b71c069fde0a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aa4dd46a-f72c-4d8c-ab7c-b71c069fde0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa4dd46a-f72c-4d8c-ab7c-b71c069fde0a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "5144ba3b-e17a-4179-ae04-9e32e0f5fdb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3c4eddb1-65f2-477c-b9d2-a7d0202f943e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "52ac2981-ea55-4aa4-9ddd-527595755831",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:27.420 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:27.420 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:17:27.420 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:27.420 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 100518 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 100518 ']' 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 100518 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100518 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:27.420 killing process with pid 100518 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100518' 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 100518 00:17:27.420 02:50:38 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 100518 00:17:27.680 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:27.680 02:50:38 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:27.680 02:50:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:27.680 02:50:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:27.680 02:50:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:27.680 ************************************ 00:17:27.680 START TEST bdev_hello_world 00:17:27.680 ************************************ 00:17:27.680 02:50:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:17:27.939 [2024-12-07 02:50:38.826756] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:27.939 [2024-12-07 02:50:38.826948] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100553 ] 00:17:27.939 [2024-12-07 02:50:38.986422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.199 [2024-12-07 02:50:39.033522] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.199 [2024-12-07 02:50:39.231445] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:28.199 [2024-12-07 02:50:39.231505] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:17:28.199 [2024-12-07 02:50:39.231524] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:28.199 [2024-12-07 02:50:39.231957] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:28.199 [2024-12-07 02:50:39.232108] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:28.199 [2024-12-07 02:50:39.232137] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:28.199 [2024-12-07 02:50:39.232197] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:28.199 00:17:28.199 [2024-12-07 02:50:39.232227] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:28.459 ************************************ 00:17:28.459 00:17:28.459 real 0m0.742s 00:17:28.459 user 0m0.400s 00:17:28.459 sys 0m0.224s 00:17:28.459 02:50:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:28.459 02:50:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:28.459 END TEST bdev_hello_world 00:17:28.459 ************************************ 00:17:28.719 02:50:39 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:28.719 02:50:39 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:28.719 02:50:39 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:28.719 02:50:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:28.719 ************************************ 00:17:28.719 START TEST bdev_bounds 00:17:28.719 ************************************ 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:17:28.719 Process bdevio pid: 100584 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=100584 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 100584' 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 100584 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 100584 ']' 00:17:28.719 02:50:39 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.719 02:50:39 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:28.719 [2024-12-07 02:50:39.644563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:28.719 [2024-12-07 02:50:39.644772] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100584 ] 00:17:28.979 [2024-12-07 02:50:39.805361] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:28.979 [2024-12-07 02:50:39.852259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.979 [2024-12-07 02:50:39.852415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.979 [2024-12-07 02:50:39.852488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.546 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.546 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:17:29.546 02:50:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:29.546 I/O targets: 00:17:29.546 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:17:29.546 
00:17:29.546 00:17:29.546 CUnit - A unit testing framework for C - Version 2.1-3 00:17:29.546 http://cunit.sourceforge.net/ 00:17:29.546 00:17:29.546 00:17:29.546 Suite: bdevio tests on: raid5f 00:17:29.546 Test: blockdev write read block ...passed 00:17:29.546 Test: blockdev write zeroes read block ...passed 00:17:29.546 Test: blockdev write zeroes read no split ...passed 00:17:29.804 Test: blockdev write zeroes read split ...passed 00:17:29.804 Test: blockdev write zeroes read split partial ...passed 00:17:29.804 Test: blockdev reset ...passed 00:17:29.804 Test: blockdev write read 8 blocks ...passed 00:17:29.804 Test: blockdev write read size > 128k ...passed 00:17:29.804 Test: blockdev write read invalid size ...passed 00:17:29.804 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.804 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.804 Test: blockdev write read max offset ...passed 00:17:29.804 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.804 Test: blockdev writev readv 8 blocks ...passed 00:17:29.804 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.804 Test: blockdev writev readv block ...passed 00:17:29.804 Test: blockdev writev readv size > 128k ...passed 00:17:29.805 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.805 Test: blockdev comparev and writev ...passed 00:17:29.805 Test: blockdev nvme passthru rw ...passed 00:17:29.805 Test: blockdev nvme passthru vendor specific ...passed 00:17:29.805 Test: blockdev nvme admin passthru ...passed 00:17:29.805 Test: blockdev copy ...passed 00:17:29.805 00:17:29.805 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.805 suites 1 1 n/a 0 0 00:17:29.805 tests 23 23 23 0 0 00:17:29.805 asserts 130 130 130 0 n/a 00:17:29.805 00:17:29.805 Elapsed time = 0.332 seconds 00:17:29.805 0 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 100584 
00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 100584 ']' 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 100584 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100584 00:17:29.805 killing process with pid 100584 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100584' 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 100584 00:17:29.805 02:50:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 100584 00:17:30.065 02:50:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:30.065 00:17:30.065 real 0m1.474s 00:17:30.065 user 0m3.488s 00:17:30.065 sys 0m0.359s 00:17:30.065 02:50:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.065 ************************************ 00:17:30.065 END TEST bdev_bounds 00:17:30.065 ************************************ 00:17:30.065 02:50:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:30.065 02:50:41 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:30.065 02:50:41 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:30.065 02:50:41 blockdev_raid5f -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:17:30.065 02:50:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:30.065 ************************************ 00:17:30.065 START TEST bdev_nbd 00:17:30.065 ************************************ 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:17:30.065 02:50:41 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100633 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100633 /var/tmp/spdk-nbd.sock 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100633 ']' 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:30.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.065 02:50:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:30.325 [2024-12-07 02:50:41.213222] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:30.325 [2024-12-07 02:50:41.213492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.325 [2024-12-07 02:50:41.380094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.585 [2024-12-07 02:50:41.426732] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:31.154 02:50:42 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.413 1+0 records in 00:17:31.413 1+0 records out 00:17:31.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004657 s, 8.8 MB/s 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:31.413 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:31.413 { 00:17:31.414 "nbd_device": "/dev/nbd0", 00:17:31.414 "bdev_name": "raid5f" 00:17:31.414 } 00:17:31.414 ]' 00:17:31.414 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:31.414 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:31.414 { 00:17:31.414 "nbd_device": "/dev/nbd0", 00:17:31.414 "bdev_name": "raid5f" 00:17:31.414 } 00:17:31.414 ]' 00:17:31.414 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.673 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:31.933 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:31.933 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:31.933 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:31.933 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:31.933 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:31.933 02:50:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:31.933 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:31.933 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:31.933 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:17:32.193 /dev/nbd0 00:17:32.193 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.194 02:50:43 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:32.194 1+0 records in 00:17:32.194 1+0 records out 00:17:32.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356994 s, 11.5 MB/s 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:32.194 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:32.454 { 00:17:32.454 "nbd_device": "/dev/nbd0", 00:17:32.454 "bdev_name": "raid5f" 00:17:32.454 } 00:17:32.454 ]' 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:32.454 { 00:17:32.454 "nbd_device": "/dev/nbd0", 00:17:32.454 "bdev_name": "raid5f" 00:17:32.454 } 00:17:32.454 ]' 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:32.454 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:32.714 256+0 records in 00:17:32.714 256+0 records out 00:17:32.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133584 s, 78.5 MB/s 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:32.714 256+0 records in 00:17:32.714 256+0 records out 00:17:32.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280527 s, 37.4 MB/s 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.714 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.974 02:50:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:17:32.974 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:33.234 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:33.234 malloc_lvol_verify 00:17:33.235 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:33.495 9e0375e2-211f-4659-8ee8-27b52298e537 00:17:33.495 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:33.755 705bd6d1-1256-44bb-9aeb-370f33addf3c 00:17:33.755 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:34.015 /dev/nbd0 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:34.015 mke2fs 1.47.0 (5-Feb-2023) 00:17:34.015 Discarding device blocks: 0/4096 done 00:17:34.015 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:34.015 00:17:34.015 Allocating group tables: 0/1 done 00:17:34.015 Writing inode tables: 0/1 done 00:17:34.015 Creating journal (1024 blocks): done 00:17:34.015 Writing superblocks and filesystem accounting information: 0/1 done 00:17:34.015 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.015 02:50:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:34.015 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.015 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.015 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.015 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.015 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.015 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100633 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100633 ']' 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100633 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100633 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:34.274 killing process with pid 100633 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100633' 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100633 00:17:34.274 02:50:45 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100633 00:17:34.533 02:50:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:34.533 00:17:34.533 real 0m4.304s 00:17:34.533 user 0m6.201s 00:17:34.533 sys 0m1.277s 00:17:34.533 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:34.533 02:50:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:34.533 ************************************ 00:17:34.533 END TEST bdev_nbd 00:17:34.533 ************************************ 00:17:34.533 02:50:45 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:34.533 02:50:45 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:17:34.533 02:50:45 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:17:34.533 02:50:45 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:34.533 02:50:45 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:34.533 02:50:45 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.533 02:50:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:34.533 ************************************ 00:17:34.533 START TEST bdev_fio 00:17:34.533 ************************************ 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:34.533 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:17:34.533 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:34.792 ************************************ 00:17:34.792 START TEST bdev_fio_rw_verify 00:17:34.792 ************************************ 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:34.792 02:50:45 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:35.052 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:35.052 fio-3.35 00:17:35.052 Starting 1 thread 00:17:47.275 00:17:47.275 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100820: Sat Dec 7 02:50:56 2024 00:17:47.275 read: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(499MiB/10001msec) 00:17:47.275 slat (nsec): min=16657, max=72007, avg=18170.11, stdev=1748.33 00:17:47.275 clat (usec): min=11, max=306, avg=125.00, stdev=43.04 00:17:47.275 lat (usec): min=29, max=326, avg=143.17, stdev=43.22 00:17:47.275 clat percentiles (usec): 00:17:47.275 | 50.000th=[ 129], 99.000th=[ 206], 99.900th=[ 231], 99.990th=[ 265], 00:17:47.275 | 99.999th=[ 297] 00:17:47.275 write: IOPS=13.4k, BW=52.4MiB/s (54.9MB/s)(518MiB/9879msec); 0 zone resets 00:17:47.275 slat (usec): min=7, max=276, avg=15.98, stdev= 3.60 00:17:47.275 clat (usec): min=55, max=1565, avg=287.99, stdev=41.48 00:17:47.275 lat (usec): min=70, max=1841, avg=303.97, stdev=42.62 00:17:47.275 clat percentiles (usec): 00:17:47.275 | 50.000th=[ 293], 99.000th=[ 363], 99.900th=[ 570], 99.990th=[ 1270], 00:17:47.275 | 99.999th=[ 1549] 00:17:47.275 bw ( KiB/s): min=50576, max=55624, per=98.82%, avg=53026.11, stdev=1787.46, samples=19 00:17:47.275 iops : min=12644, max=13906, avg=13256.53, stdev=446.87, samples=19 00:17:47.275 lat (usec) : 20=0.01%, 50=0.01%, 
100=17.07%, 250=41.00%, 500=41.86% 00:17:47.275 lat (usec) : 750=0.04%, 1000=0.01% 00:17:47.275 lat (msec) : 2=0.01% 00:17:47.275 cpu : usr=98.79%, sys=0.56%, ctx=26, majf=0, minf=13521 00:17:47.275 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:47.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.275 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.275 issued rwts: total=127807,132521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.275 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:47.275 00:17:47.275 Run status group 0 (all jobs): 00:17:47.275 READ: bw=49.9MiB/s (52.3MB/s), 49.9MiB/s-49.9MiB/s (52.3MB/s-52.3MB/s), io=499MiB (523MB), run=10001-10001msec 00:17:47.275 WRITE: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=518MiB (543MB), run=9879-9879msec 00:17:47.275 ----------------------------------------------------- 00:17:47.275 Suppressions used: 00:17:47.275 count bytes template 00:17:47.275 1 7 /usr/src/fio/parse.c 00:17:47.275 735 70560 /usr/src/fio/iolog.c 00:17:47.275 1 8 libtcmalloc_minimal.so 00:17:47.275 1 904 libcrypto.so 00:17:47.275 ----------------------------------------------------- 00:17:47.275 00:17:47.275 00:17:47.275 real 0m11.203s 00:17:47.275 user 0m11.511s 00:17:47.275 sys 0m0.618s 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:47.275 ************************************ 00:17:47.275 END TEST bdev_fio_rw_verify 00:17:47.275 ************************************ 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "aa4dd46a-f72c-4d8c-ab7c-b71c069fde0a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "aa4dd46a-f72c-4d8c-ab7c-b71c069fde0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "aa4dd46a-f72c-4d8c-ab7c-b71c069fde0a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "5144ba3b-e17a-4179-ae04-9e32e0f5fdb9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3c4eddb1-65f2-477c-b9d2-a7d0202f943e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "52ac2981-ea55-4aa4-9ddd-527595755831",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:47.275 /home/vagrant/spdk_repo/spdk 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:47.275 00:17:47.275 real 0m11.498s 00:17:47.275 user 0m11.626s 00:17:47.275 sys 0m0.769s 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:47.275 02:50:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:47.275 ************************************ 00:17:47.275 END TEST bdev_fio 00:17:47.275 ************************************ 00:17:47.275 02:50:57 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:47.275 02:50:57 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:47.275 02:50:57 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:47.275 02:50:57 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:47.275 02:50:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:47.275 ************************************ 00:17:47.275 START TEST bdev_verify 00:17:47.275 ************************************ 00:17:47.275 02:50:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:47.275 [2024-12-07 02:50:57.145071] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 
00:17:47.275 [2024-12-07 02:50:57.145185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100967 ] 00:17:47.275 [2024-12-07 02:50:57.305844] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:47.275 [2024-12-07 02:50:57.361670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.275 [2024-12-07 02:50:57.361751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.275 Running I/O for 5 seconds... 00:17:48.915 11393.00 IOPS, 44.50 MiB/s [2024-12-07T02:51:00.931Z] 11411.00 IOPS, 44.57 MiB/s [2024-12-07T02:51:01.869Z] 11446.67 IOPS, 44.71 MiB/s [2024-12-07T02:51:02.808Z] 11412.00 IOPS, 44.58 MiB/s [2024-12-07T02:51:02.808Z] 11436.20 IOPS, 44.67 MiB/s 00:17:51.730 Latency(us) 00:17:51.730 [2024-12-07T02:51:02.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.730 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:51.730 Verification LBA range: start 0x0 length 0x2000 00:17:51.730 raid5f : 5.02 4569.95 17.85 0.00 0.00 42032.61 129.68 29763.07 00:17:51.730 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:51.730 Verification LBA range: start 0x2000 length 0x2000 00:17:51.730 raid5f : 5.01 6858.87 26.79 0.00 0.00 28035.80 316.59 21177.57 00:17:51.730 [2024-12-07T02:51:02.808Z] =================================================================================================================== 00:17:51.730 [2024-12-07T02:51:02.808Z] Total : 11428.81 44.64 0.00 0.00 33639.50 129.68 29763.07 00:17:51.990 00:17:51.990 real 0m5.789s 00:17:51.990 user 0m10.725s 00:17:51.990 sys 0m0.253s 00:17:51.990 02:51:02 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.990 02:51:02 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:51.990 ************************************ 00:17:51.990 END TEST bdev_verify 00:17:51.990 ************************************ 00:17:51.990 02:51:02 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:51.990 02:51:02 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:17:51.990 02:51:02 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.990 02:51:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:51.990 ************************************ 00:17:51.990 START TEST bdev_verify_big_io 00:17:51.990 ************************************ 00:17:51.990 02:51:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:51.990 [2024-12-07 02:51:03.012698] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:51.990 [2024-12-07 02:51:03.012857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101049 ] 00:17:52.250 [2024-12-07 02:51:03.177974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:52.250 [2024-12-07 02:51:03.226502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.250 [2024-12-07 02:51:03.226645] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.509 Running I/O for 5 seconds... 
00:17:54.828 633.00 IOPS, 39.56 MiB/s [2024-12-07T02:51:06.844Z] 761.00 IOPS, 47.56 MiB/s [2024-12-07T02:51:07.782Z] 803.67 IOPS, 50.23 MiB/s [2024-12-07T02:51:08.720Z] 808.75 IOPS, 50.55 MiB/s [2024-12-07T02:51:08.720Z] 812.40 IOPS, 50.77 MiB/s 00:17:57.642 Latency(us) 00:17:57.642 [2024-12-07T02:51:08.720Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.642 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:17:57.642 Verification LBA range: start 0x0 length 0x200 00:17:57.642 raid5f : 5.28 360.69 22.54 0.00 0.00 8779149.53 194.96 373641.06 00:17:57.642 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:17:57.642 Verification LBA range: start 0x200 length 0x200 00:17:57.642 raid5f : 5.22 462.54 28.91 0.00 0.00 6920558.79 201.22 296714.96 00:17:57.642 [2024-12-07T02:51:08.720Z] =================================================================================================================== 00:17:57.642 [2024-12-07T02:51:08.720Z] Total : 823.23 51.45 0.00 0.00 7740525.29 194.96 373641.06 00:17:57.902 00:17:57.902 real 0m6.034s 00:17:57.902 user 0m11.211s 00:17:57.902 sys 0m0.263s 00:17:57.902 02:51:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.902 02:51:08 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:17:57.902 ************************************ 00:17:57.902 END TEST bdev_verify_big_io 00:17:57.902 ************************************ 00:17:58.161 02:51:09 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.161 02:51:09 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:58.161 02:51:09 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.161 02:51:09 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:58.161 ************************************ 00:17:58.161 START TEST bdev_write_zeroes 00:17:58.161 ************************************ 00:17:58.161 02:51:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:58.161 [2024-12-07 02:51:09.123289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:58.161 [2024-12-07 02:51:09.123455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101136 ] 00:17:58.420 [2024-12-07 02:51:09.287544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.420 [2024-12-07 02:51:09.340288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.679 Running I/O for 1 seconds... 
00:17:59.618 30423.00 IOPS, 118.84 MiB/s 00:17:59.618 Latency(us) 00:17:59.618 [2024-12-07T02:51:10.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.618 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:59.618 raid5f : 1.01 30399.35 118.75 0.00 0.00 4196.94 1266.36 5752.29 00:17:59.618 [2024-12-07T02:51:10.696Z] =================================================================================================================== 00:17:59.618 [2024-12-07T02:51:10.696Z] Total : 30399.35 118.75 0.00 0.00 4196.94 1266.36 5752.29 00:17:59.878 00:17:59.878 real 0m1.754s 00:17:59.878 user 0m1.400s 00:17:59.878 sys 0m0.235s 00:17:59.878 02:51:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.878 02:51:10 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:59.878 ************************************ 00:17:59.878 END TEST bdev_write_zeroes 00:17:59.878 ************************************ 00:17:59.878 02:51:10 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.878 02:51:10 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:59.878 02:51:10 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.878 02:51:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:59.878 ************************************ 00:17:59.878 START TEST bdev_json_nonenclosed 00:17:59.878 ************************************ 00:17:59.878 02:51:10 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:59.878 [2024-12-07 
02:51:10.947439] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:17:59.878 [2024-12-07 02:51:10.947566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101167 ] 00:18:00.139 [2024-12-07 02:51:11.105899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.139 [2024-12-07 02:51:11.155293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.139 [2024-12-07 02:51:11.155391] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:00.139 [2024-12-07 02:51:11.155414] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:00.139 [2024-12-07 02:51:11.155428] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.399 00:18:00.399 real 0m0.404s 00:18:00.399 user 0m0.172s 00:18:00.399 sys 0m0.128s 00:18:00.399 02:51:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.399 02:51:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:00.399 ************************************ 00:18:00.399 END TEST bdev_json_nonenclosed 00:18:00.399 ************************************ 00:18:00.399 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:00.399 02:51:11 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:18:00.399 02:51:11 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:00.399 02:51:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:00.399 
************************************ 00:18:00.399 START TEST bdev_json_nonarray 00:18:00.399 ************************************ 00:18:00.399 02:51:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:00.399 [2024-12-07 02:51:11.423556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 23.11.0 initialization... 00:18:00.399 [2024-12-07 02:51:11.423693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101198 ] 00:18:00.660 [2024-12-07 02:51:11.582079] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.660 [2024-12-07 02:51:11.637078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.660 [2024-12-07 02:51:11.637194] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:00.660 [2024-12-07 02:51:11.637218] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:00.660 [2024-12-07 02:51:11.637239] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.920 00:18:00.920 real 0m0.409s 00:18:00.920 user 0m0.178s 00:18:00.920 sys 0m0.127s 00:18:00.920 02:51:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.920 02:51:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:00.920 ************************************ 00:18:00.920 END TEST bdev_json_nonarray 00:18:00.920 ************************************ 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:00.920 02:51:11 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:00.920 00:18:00.920 real 0m34.965s 00:18:00.920 user 0m47.348s 00:18:00.920 sys 0m4.749s 00:18:00.921 02:51:11 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:00.921 02:51:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:00.921 
************************************ 00:18:00.921 END TEST blockdev_raid5f 00:18:00.921 ************************************ 00:18:00.921 02:51:11 -- spdk/autotest.sh@194 -- # uname -s 00:18:00.921 02:51:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:00.921 02:51:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:00.921 02:51:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:00.921 02:51:11 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:00.921 02:51:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.921 02:51:11 -- common/autotest_common.sh@10 -- # set +x 00:18:00.921 02:51:11 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:00.921 02:51:11 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:18:00.921 02:51:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:00.921 02:51:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:00.921 02:51:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:18:00.921 02:51:11 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:18:00.921 02:51:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:18:00.921 02:51:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:00.921 02:51:11 -- common/autotest_common.sh@10 -- # set +x 00:18:00.921 02:51:11 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:18:00.921 02:51:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:18:00.921 02:51:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:18:00.921 02:51:11 -- common/autotest_common.sh@10 -- # set +x 00:18:03.462 INFO: APP EXITING 00:18:03.462 INFO: killing all VMs 00:18:03.462 INFO: killing vhost app 00:18:03.462 INFO: EXIT DONE 00:18:03.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:03.739 Waiting for block devices as requested 00:18:03.739 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:04.021 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:05.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:05.012 Cleaning 00:18:05.012 Removing: /var/run/dpdk/spdk0/config 00:18:05.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:05.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:05.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:05.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:05.012 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:05.012 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:05.012 Removing: /dev/shm/spdk_tgt_trace.pid69209 00:18:05.012 Removing: /var/run/dpdk/spdk0 00:18:05.012 Removing: /var/run/dpdk/spdk_pid100258 00:18:05.012 Removing: /var/run/dpdk/spdk_pid100518 00:18:05.012 Removing: /var/run/dpdk/spdk_pid100553 00:18:05.012 Removing: /var/run/dpdk/spdk_pid100584 00:18:05.012 Removing: /var/run/dpdk/spdk_pid100805 00:18:05.012 Removing: /var/run/dpdk/spdk_pid100967 00:18:05.012 Removing: 
/var/run/dpdk/spdk_pid101049 00:18:05.012 Removing: /var/run/dpdk/spdk_pid101136 00:18:05.012 Removing: /var/run/dpdk/spdk_pid101167 00:18:05.012 Removing: /var/run/dpdk/spdk_pid101198 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69045 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69209 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69416 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69503 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69532 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69638 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69656 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69844 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69923 00:18:05.012 Removing: /var/run/dpdk/spdk_pid69997 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70097 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70183 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70217 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70259 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70330 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70441 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70872 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70927 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70980 00:18:05.012 Removing: /var/run/dpdk/spdk_pid70996 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71065 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71081 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71156 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71166 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71219 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71237 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71285 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71297 00:18:05.012 Removing: /var/run/dpdk/spdk_pid71437 00:18:05.272 Removing: /var/run/dpdk/spdk_pid71478 00:18:05.272 Removing: /var/run/dpdk/spdk_pid71557 00:18:05.272 Removing: /var/run/dpdk/spdk_pid72758 00:18:05.272 Removing: /var/run/dpdk/spdk_pid72953 00:18:05.272 Removing: /var/run/dpdk/spdk_pid73088 00:18:05.272 Removing: /var/run/dpdk/spdk_pid73703 00:18:05.272 Removing: 
/var/run/dpdk/spdk_pid73904 00:18:05.272 Removing: /var/run/dpdk/spdk_pid74033 00:18:05.272 Removing: /var/run/dpdk/spdk_pid74643 00:18:05.272 Removing: /var/run/dpdk/spdk_pid74962 00:18:05.272 Removing: /var/run/dpdk/spdk_pid75098 00:18:05.272 Removing: /var/run/dpdk/spdk_pid76444 00:18:05.272 Removing: /var/run/dpdk/spdk_pid76686 00:18:05.272 Removing: /var/run/dpdk/spdk_pid76822 00:18:05.272 Removing: /var/run/dpdk/spdk_pid78168 00:18:05.272 Removing: /var/run/dpdk/spdk_pid78410 00:18:05.272 Removing: /var/run/dpdk/spdk_pid78545 00:18:05.272 Removing: /var/run/dpdk/spdk_pid79886 00:18:05.272 Removing: /var/run/dpdk/spdk_pid80315 00:18:05.272 Removing: /var/run/dpdk/spdk_pid80449 00:18:05.272 Removing: /var/run/dpdk/spdk_pid81886 00:18:05.272 Removing: /var/run/dpdk/spdk_pid82134 00:18:05.272 Removing: /var/run/dpdk/spdk_pid82274 00:18:05.272 Removing: /var/run/dpdk/spdk_pid83704 00:18:05.272 Removing: /var/run/dpdk/spdk_pid83958 00:18:05.272 Removing: /var/run/dpdk/spdk_pid84087 00:18:05.272 Removing: /var/run/dpdk/spdk_pid85528 00:18:05.272 Removing: /var/run/dpdk/spdk_pid85999 00:18:05.272 Removing: /var/run/dpdk/spdk_pid86128 00:18:05.272 Removing: /var/run/dpdk/spdk_pid86260 00:18:05.272 Removing: /var/run/dpdk/spdk_pid86665 00:18:05.272 Removing: /var/run/dpdk/spdk_pid87390 00:18:05.272 Removing: /var/run/dpdk/spdk_pid87755 00:18:05.272 Removing: /var/run/dpdk/spdk_pid88446 00:18:05.272 Removing: /var/run/dpdk/spdk_pid88874 00:18:05.272 Removing: /var/run/dpdk/spdk_pid89611 00:18:05.272 Removing: /var/run/dpdk/spdk_pid89998 00:18:05.272 Removing: /var/run/dpdk/spdk_pid91924 00:18:05.272 Removing: /var/run/dpdk/spdk_pid92351 00:18:05.272 Removing: /var/run/dpdk/spdk_pid92780 00:18:05.272 Removing: /var/run/dpdk/spdk_pid94814 00:18:05.272 Removing: /var/run/dpdk/spdk_pid95288 00:18:05.272 Removing: /var/run/dpdk/spdk_pid95769 00:18:05.272 Removing: /var/run/dpdk/spdk_pid96802 00:18:05.272 Removing: /var/run/dpdk/spdk_pid97114 00:18:05.272 Removing: 
/var/run/dpdk/spdk_pid98040 00:18:05.272 Removing: /var/run/dpdk/spdk_pid98352 00:18:05.272 Removing: /var/run/dpdk/spdk_pid99271 00:18:05.272 Removing: /var/run/dpdk/spdk_pid99594 00:18:05.272 Clean 00:18:05.532 02:51:16 -- common/autotest_common.sh@1451 -- # return 0 00:18:05.532 02:51:16 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:18:05.532 02:51:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.532 02:51:16 -- common/autotest_common.sh@10 -- # set +x 00:18:05.532 02:51:16 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:18:05.532 02:51:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.532 02:51:16 -- common/autotest_common.sh@10 -- # set +x 00:18:05.532 02:51:16 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:05.532 02:51:16 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:05.532 02:51:16 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:05.532 02:51:16 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:18:05.532 02:51:16 -- spdk/autotest.sh@394 -- # hostname 00:18:05.532 02:51:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:05.792 geninfo: WARNING: invalid characters removed from testname! 
00:18:27.761 02:51:38 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:30.304 02:51:40 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:32.215 02:51:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:34.125 02:51:45 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:18:36.032 02:51:47 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:38.572 02:51:49 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:40.482 02:51:51 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:40.482 02:51:51 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:18:40.482 02:51:51 -- common/autotest_common.sh@1681 -- $ lcov --version
00:18:40.482 02:51:51 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:18:40.482 02:51:51 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:18:40.482 02:51:51 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:18:40.482 02:51:51 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:18:40.482 02:51:51 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:18:40.482 02:51:51 -- scripts/common.sh@336 -- $ IFS=.-:
00:18:40.482 02:51:51 -- scripts/common.sh@336 -- $ read -ra ver1
00:18:40.482 02:51:51 -- scripts/common.sh@337 -- $ IFS=.-:
00:18:40.482 02:51:51 -- scripts/common.sh@337 -- $ read -ra ver2
00:18:40.482 02:51:51 -- scripts/common.sh@338 -- $ local 'op=<'
00:18:40.482 02:51:51 -- scripts/common.sh@340 -- $ ver1_l=2
00:18:40.482 02:51:51 -- scripts/common.sh@341 -- $ ver2_l=1
00:18:40.482 02:51:51 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:18:40.482 02:51:51 -- scripts/common.sh@344 -- $ case "$op" in
00:18:40.482 02:51:51 -- scripts/common.sh@345 -- $ : 1
00:18:40.482 02:51:51 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:18:40.482 02:51:51 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:40.482 02:51:51 -- scripts/common.sh@365 -- $ decimal 1
00:18:40.482 02:51:51 -- scripts/common.sh@353 -- $ local d=1
00:18:40.482 02:51:51 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:18:40.482 02:51:51 -- scripts/common.sh@355 -- $ echo 1
00:18:40.482 02:51:51 -- scripts/common.sh@365 -- $ ver1[v]=1
00:18:40.482 02:51:51 -- scripts/common.sh@366 -- $ decimal 2
00:18:40.482 02:51:51 -- scripts/common.sh@353 -- $ local d=2
00:18:40.482 02:51:51 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:18:40.482 02:51:51 -- scripts/common.sh@355 -- $ echo 2
00:18:40.482 02:51:51 -- scripts/common.sh@366 -- $ ver2[v]=2
00:18:40.482 02:51:51 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:18:40.482 02:51:51 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:18:40.482 02:51:51 -- scripts/common.sh@368 -- $ return 0
00:18:40.482 02:51:51 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:40.482 02:51:51 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:18:40.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.482 --rc genhtml_branch_coverage=1
00:18:40.482 --rc genhtml_function_coverage=1
00:18:40.482 --rc genhtml_legend=1
00:18:40.482 --rc geninfo_all_blocks=1
00:18:40.482 --rc geninfo_unexecuted_blocks=1
00:18:40.482
00:18:40.482 '
00:18:40.482 02:51:51 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:18:40.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.482 --rc genhtml_branch_coverage=1
00:18:40.482 --rc genhtml_function_coverage=1
00:18:40.482 --rc genhtml_legend=1
00:18:40.482 --rc geninfo_all_blocks=1
00:18:40.482 --rc geninfo_unexecuted_blocks=1
00:18:40.482
00:18:40.482 '
00:18:40.482 02:51:51 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:18:40.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.482 --rc genhtml_branch_coverage=1
00:18:40.482 --rc genhtml_function_coverage=1
00:18:40.482 --rc genhtml_legend=1
00:18:40.482 --rc geninfo_all_blocks=1
00:18:40.482 --rc geninfo_unexecuted_blocks=1
00:18:40.482
00:18:40.482 '
00:18:40.482 02:51:51 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:18:40.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:40.482 --rc genhtml_branch_coverage=1
00:18:40.482 --rc genhtml_function_coverage=1
00:18:40.482 --rc genhtml_legend=1
00:18:40.482 --rc geninfo_all_blocks=1
00:18:40.482 --rc geninfo_unexecuted_blocks=1
00:18:40.482
00:18:40.482 '
00:18:40.482 02:51:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:18:40.483 02:51:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:18:40.483 02:51:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:18:40.483 02:51:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:40.483 02:51:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:40.483 02:51:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:40.483 02:51:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:40.483 02:51:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:40.483 02:51:51 -- paths/export.sh@5 -- $ export PATH
00:18:40.483 02:51:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:40.483 02:51:51 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:18:40.483 02:51:51 -- common/autobuild_common.sh@479 -- $ date +%s
00:18:40.483 02:51:51 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733539911.XXXXXX
00:18:40.483 02:51:51 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733539911.a5G5eA
00:18:40.483 02:51:51 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:18:40.483 02:51:51 -- common/autobuild_common.sh@485 -- $ '[' -n v23.11 ']'
00:18:40.483 02:51:51 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:18:40.483 02:51:51 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:18:40.483 02:51:51 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:18:40.483 02:51:51 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:18:40.483 02:51:51 -- common/autobuild_common.sh@495 -- $ get_config_params
00:18:40.483 02:51:51 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:18:40.483 02:51:51 -- common/autotest_common.sh@10 -- $ set +x
00:18:40.483 02:51:51 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:18:40.483 02:51:51 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:18:40.483 02:51:51 -- pm/common@17 -- $ local monitor
00:18:40.483 02:51:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:40.483 02:51:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:40.483 02:51:51 -- pm/common@25 -- $ sleep 1
00:18:40.483 02:51:51 -- pm/common@21 -- $ date +%s
00:18:40.483 02:51:51 -- pm/common@21 -- $ date +%s
00:18:40.483 02:51:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733539911
00:18:40.483 02:51:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733539911
00:18:40.483 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733539911_collect-cpu-load.pm.log
00:18:40.483 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733539911_collect-vmstat.pm.log
00:18:41.423 02:51:52 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:18:41.423 02:51:52 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:18:41.423 02:51:52 -- spdk/autopackage.sh@14 -- $ timing_finish
00:18:41.423 02:51:52 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:41.423 02:51:52 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:41.423 02:51:52 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:41.423 02:51:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:18:41.423 02:51:52 -- pm/common@29 -- $ signal_monitor_resources TERM
00:18:41.423 02:51:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:18:41.423 02:51:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:41.423 02:51:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:18:41.423 02:51:52 -- pm/common@44 -- $ pid=102705
00:18:41.423 02:51:52 -- pm/common@50 -- $ kill -TERM 102705
00:18:41.423 02:51:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:18:41.423 02:51:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:18:41.423 02:51:52 -- pm/common@44 -- $ pid=102707
00:18:41.423 02:51:52 -- pm/common@50 -- $ kill -TERM 102707
00:18:41.423 + [[ -n 6173 ]]
00:18:41.423 + sudo kill 6173
00:18:41.433 [Pipeline] }
00:18:41.449 [Pipeline] // timeout
00:18:41.455 [Pipeline] }
00:18:41.469 [Pipeline] // stage
00:18:41.475 [Pipeline] }
00:18:41.489 [Pipeline] // catchError
00:18:41.499 [Pipeline] stage
00:18:41.501 [Pipeline] { (Stop VM)
00:18:41.513 [Pipeline] sh
00:18:41.796 + vagrant halt
00:18:44.329 ==> default: Halting domain...
00:18:52.470 [Pipeline] sh
00:18:52.751 + vagrant destroy -f
00:18:55.289 ==> default: Removing domain...
00:18:55.302 [Pipeline] sh
00:18:55.608 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:18:55.636 [Pipeline] }
00:18:55.651 [Pipeline] // stage
00:18:55.657 [Pipeline] }
00:18:55.672 [Pipeline] // dir
00:18:55.677 [Pipeline] }
00:18:55.692 [Pipeline] // wrap
00:18:55.698 [Pipeline] }
00:18:55.710 [Pipeline] // catchError
00:18:55.719 [Pipeline] stage
00:18:55.721 [Pipeline] { (Epilogue)
00:18:55.733 [Pipeline] sh
00:18:56.016 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:19:00.225 [Pipeline] catchError
00:19:00.227 [Pipeline] {
00:19:00.240 [Pipeline] sh
00:19:00.525 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:19:00.525 Artifacts sizes are good
00:19:00.534 [Pipeline] }
00:19:00.548 [Pipeline] // catchError
00:19:00.559 [Pipeline] archiveArtifacts
00:19:00.566 Archiving artifacts
00:19:00.663 [Pipeline] cleanWs
00:19:00.675 [WS-CLEANUP] Deleting project workspace...
00:19:00.675 [WS-CLEANUP] Deferred wipeout is used...
00:19:00.681 [WS-CLEANUP] done
00:19:00.683 [Pipeline] }
00:19:00.699 [Pipeline] // stage
00:19:00.704 [Pipeline] }
00:19:00.718 [Pipeline] // node
00:19:00.723 [Pipeline] End of Pipeline
00:19:00.763 Finished: SUCCESS